Research article · Open access

Qualitative Research of Robot-Helping Behaviors in a Field Trial

Published: 14 June 2024

Abstract

During a previous field study of a robot interacting with mall visitors, we observed a surprising event in which a leaflet-distributing robot was abused but was subsequently helped by one of its previous abusers. After analyzing 72.25 hours of video data, we identified 47 cases where the robot dropped a leaflet and classified them according to the following three criteria: 1) the interaction between the potential helper or others and the robot before it dropped the leaflet, 2) the nature of that interaction (abusive or not), and 3) whether the robot was helped. Using the Trajectory Equifinality Model (TEM), we analyzed the 19 cases where the robot was helped. We identified an interaction process that started with individuals who paid attention to the robot, continued through whether they had abusive or non-abusive interactions with it and whether they noticed its failure, and ended with whether they helped it. The presence of others encouraged a person to focus on the robot, and interactions with it led to helping, regardless of whether the interaction was abusive. The absence of others when the robot dropped the leaflet also encouraged helping. The findings of this study will motivate interaction designs for social robots that can leverage human help.

1 Introduction

Even after robots become more fully integrated into human society, they will still require help from humans. Robots that can leverage human assistance will have more potential than those that cannot [1]. Human-robot interaction is not only one-way, where robots assist humans; it is also two-way, where humans assist robots [2]. A robot can be part of a single ecosystem [3] in which it interacts with humans both as part of their environment and as a subject of action. When a robot is in a difficult situation, it may still be able to perform its intended role with the help of a person. For example, if a delivery robot cannot press an elevator button and thus cannot reach its destination floor, a person could press the button and help it get there. This would also benefit the people who introduced the robot. We believe that a robot that elicits help from humans can be a critical tool for the practical use of robots in future societies.
On one hand, situations where a robot asks for help have been well studied, mainly in laboratory settings. Studies that investigated factors on the robot side revealed that a robot is more likely to be helped when it is friendly and perceived to be autonomous [1]. Studies that investigated the factors on the human side identified the importance of cultural context [4] and availability (when people are not busy) for helping robots [5]. Such studies examined factors that encouraged people to offer assistance when robots asked for help. Another study set up interactions in a wild environment where a robot asked passersby for directions on a public road and received them [6].
On the other hand, little is known about human spontaneous-helping behavior with robots because such studies are generally more fruitful in actual wild environments, where not only the interaction between the robot and the potential helper but also their interaction with the environment (e.g., the group dynamics with the people around them) influences the act of helping the robot. Latane & Darley [7] identified a process through which people arrive at helping behavior. However, the actual process that encourages people to assist a robot, as well as the factors that facilitate it, remains unknown. A wild environment presents a mixture of interactions between various factors and the robot. If we capture such interactions as a time series, we might make new discoveries.
The characteristics of those who are more likely to engage in helping behavior include a prosocial personality and strong empathy, which is one of the most obvious prosocial traits [8]. Given that empathy is the tendency to understand another person's psychological state from his/her perspective and to experience the emotional responses of others [9], compassionate people are more likely to engage in helping behavior.
During our observations in a wild setting, children were teasing and obstructing the robot. Then we observed a remarkable, counterintuitive event: a boy who had just teased the robot picked up a flyer it had accidentally dropped and tried to return it. We were struck that a previous abuser could express compassion by suddenly offering help. This event highlighted the importance of understanding a process rather than a stand-alone interaction of abuse or assistance. Robot abuse is generally viewed as a problem that must be “stopped” or “prevented.” However, if a person who abused the robot subsequently helps it, then perhaps robot abuse is not something that should be completely eradicated. Or perhaps, more accurately, the idea of robot abuse must be reexamined in a different light.
Having observed an abusive child who later helped a robot, instead of pursuing the complete elimination of robot abuse, we should design robots that can interact with their abusers and perhaps convert them into future helpers. We became interested in the chronological involvement of those who spontaneously helped a robot. Robot abuse is an interaction process among a robot, an abuser, and bystanders [10]. Is helping a robot a similar process? We conducted an exploratory analysis of situations in a wild setting in which robots received assistance to determine what interactions occurred among humans, robots, and their environment. We first categorized the situations with a qualitative analysis method called the Trajectory Equifinality Model (TEM) to explore the interaction from the time a person who engaged in helping behavior met the robot to the time he/she helped it. Then we generated a process model of helping behavior by integrating all of the models.

2 Related Works

2.1 Prosocial/Helping Behavior among Humans

Helping behavior, which has been studied in both social and developmental psychology since the late 1960s, is also called prosocial behavior or altruism.
Social psychology research on helping behavior has clustered around work on interpersonal processes (such as the relationship between potential helpers and those who are helped) to answer why people assist others [11]. Latane & Darley [7] presented the following five-step decision-making model that describes the process by which a person takes action to help: 1) recognizing a problem, 2) interpreting it as an emergency and identifying another's need, 3) deciding whether one has a responsibility to act, 4) recognizing whether one knows how to help, and 5) deciding whether to help.
In developmental psychology, the process of prosocial behaviors can be divided into three stages [12]. The first stage, which is attention to the demands of others, is influenced by the level of social cognitive development, the socialization experiences, the degree of the favor, and interest in others [13]. The second stage is motivational, which includes those who are aware of the demands of others and choose prosocial behavior. This is the prosocial judgment stage, which is influenced by the attribution of causality and empathy [14]. The third stage is the connection between intention and behavior, where prosocial judgment is enacted. This stage is also shaped by self-efficacy and the awareness of prosocial behavior [15]. In any field, helping behavior is a process.
On the other hand, studies have also focused on various characteristics of helpers, such as personality, mood, and emotion in helping situations. Personality might exert more influence on prosocial behavior when the situational pressures for providing assistance are weak and its costs are high [8]. Among several prosocial personality traits, empathy is considered the foundation for sustained prosociality in society [16, 17]. Bierhoff et al. [18] showed that among personality tendencies, empathy has the highest single correlation with prosocial behavior.

2.2 People Accept Robot Requests

During laboratory experiments, people often help a robot when it seeks assistance. Previous studies identified the factors that influence people to accept requests from robots. These factors, however, seem to vary depending on the situation. One is the relationship between a human and a robot. A pioneering study [19] argued that participants accept requests from a robot if a person with authority appears on the robot's screen. Some type of relationship must exist between humans and robots in advance to expect that people will heed a robot's request [20]. However, Hüttenrauch and Eklundh [21] reported that about half of their participants, even those encountering a service robot for the first time, helped a robot that requested assistance.
Other factors from the robot's side, such as its form and attitude, also seem related to human-helping behavior. Human perception of robot behavior depends on the physical and behavioral cues of robots (e.g., [1, 22, 23]). When comparing humanoid robots and video-displayed robots in collaborative tasks, humans more frequently respond to the instructions of the former, i.e., a physically co-present robot [24]. Fischer et al. [25] found that it is possible to get people's attention and increase their affinity for a robot when it speaks rather than when it beeps. People helped a robot nearly 50% faster if they perceived it to be autonomous rather than teleoperated [1]. An experiment that measured robot animacy found that an agreeable and intelligent robot maximized its animacy and caused users to hesitate over the question of switching it off [22]. On the other hand, people's situational factors when helping others also affect such behavior toward robots. Hüttenrauch and Eklundh [21] experimentally reported that people's willingness to help a robot fell sharply when they were busy or had other more pressing demands on their time.
Robots that make requests have also been studied in wild settings. A study described how some people accepted requests from a robot and provided directions on public roads [6]. This study also showed that people helped a robot without any prior relationship with it. Fallatah et al. [26] conducted field experiments in which a robot asked about items for sale at a university café and found that many people went beyond just responding to requests and actually anticipated its needs. Such responses were influenced by elements of microculture, including architecture, atmosphere, and the requested items.
Although these previous studies investigated people's helping behavior in response to requests from robots, they overlooked when, how, and who spontaneously assisted robots in distress. The factors may well differ in the case of spontaneous-helping behavior.

2.3 Children's Spontaneous Helping of Robots

In human-human communication, we often see people who aid others without being asked. For example, common prosocial behaviors include picking up something that a person dropped or asking, “Can I help you?” of a person who seems to need it. The term spontaneous-helping behavior denotes such unprompted helping behaviors.
Children even help people achieve their goals without verbal requests or physical rewards. Such spontaneous prosocial (helping) behavior is usually described as indiscriminate because a child's prosocial behavior is robust across many aspects of its recipient. A recent study showed that children's helping behavior extends to robots as well. Martin et al. [27] conducted a laboratory experiment with 40 children who witnessed a robot that intentionally/unintentionally dropped an object beyond its reach. The children only returned it when they thought it was accidentally dropped. In other words, the children understood when the robot needed help, and in such situations they were willing to provide it.
If helping behavior is a process, it can be assumed that its factors will vary depending on the stages of the process. One factor at a certain stage in the generation of helping behavior towards a robot is noticing and recognizing when it needs help.
Intuitively, abuse is bad; an abuser is not good. On the other hand, a person who helps another is a good person. However, in fact, in our field trial we observed that the same person abused a robot and subsequently helped it.

2.4 Robot Abuse

Since robots remain novel in most situations and contexts, they continue to attract the attention of many people. In their interactions with robots, most people act appropriately within the bounds of social interaction that have been forged over the years. Unfortunately, some exhibit abusive behaviors, including hitting and teasing a robot. Such conduct is called robot abuse. Robot abuse is much less likely in laboratory experiments, where participants are aware that they are being observed by experimenters. However, after robots were introduced into public spaces, robot abuse began to emerge. Salvini et al. [28] reported that when they demonstrated human-sized robots in public spaces, people closely approached them out of curiosity, and some young people kicked, beat, and hit the robots. They described how the nature of the abuse suffered by the robots resembled bullying more than vandalism. Brscic et al. [29] observed children who relentlessly disturbed a robot serving as a shopping mall guide by directing abusive language at it and kicking/hitting it. Of course, vandalism that greatly exceeds simple bullying of robots has also been identified. A notorious episode occurred in America with the HitchBot robot, which was completely destroyed [30] after it had already safely traveled across several other countries [31]. Yamada et al. [10] described how a group of children, stimulated by other children's previous similar behavior, escalated their own behavior from mild to physical forms and finally to such severe abuse as repeatedly kicking, slapping while yelling, and hitting with a stick. Robot abuse thus includes a wide range of behaviors, from such verbal behavior as teasing and saying nasty things to such physical aggression as hitting, kicking, and vandalizing. Although vandalism may be the ultimate form of abuse, we must first examine abusive behavior in its mild form, because it often escalates from teasing.
The factors that cause such abuse are undoubtedly complex, although some studies have focused on uncovering why it occurs. Robot abuse occurs when a robot is unattended in wild settings [28, 29]. Keijsers and Bartneck [32] concluded that low mind attribution is one source of abuse toward robotic agents. On the other hand, Nomura et al. [33] interviewed children who abused a robot at shopping centers and reported that their actions were fueled by curiosity or enjoyment. These children perceived the robot as more human-like than a machine and believed that it has emotions and experiences pain, attitudes suggesting that they abused the robot without any hostility.
A recent line of work has investigated ways to suppress robot abuse. After acknowledging the difficulty of getting children to stop abusing, Brscic et al. [29] proposed a behavioral design for a robot that flees from children. Ku et al. [34] similarly showed that children who abused a turtle-shaped robot stopped their abusive behaviors when it crouched (hid) in its shell. However, even though these methods stopped the abuse, they also inhibited the role for which the robots were designed. Alternatively, Connolly et al. [35] studied whether robots can induce prosocial behavior in human adults. In their study, a confederate and a participant engaged in a collaborative task with three robots, and the former abused one of the robots. Their results showed that participants were more likely to intervene prosocially when the other (bystander) robots expressed sadness in response to the abuse rather than ignoring it. Lucas et al. [36] found that when verbal abuse was directed at a larger robot, participants did not label such behavior as mistreatment, although they did when similar abuse was directed at a child-sized robot. Perhaps a smaller robot is safer from abuse. Even though various studies have explored ways to suppress robot abuse, further research is needed to ensure that robots can fulfill their roles without being abused.

3 Data

We used previously collected data from a field trial in a Japanese shopping mall, as reported by Satake et al. [37].

3.1 Robot

The robot used in this research stood 120 cm tall with a 40-cm diameter on a mobile platform [37]. It conducted direction-giving and leaflet-distributing services in an area where visitors often browsed around shops and other locations. The robot wore a sign on its chest that said “Information staff” to clearly identify its role to visitors. It was equipped with a printer and could take and distribute leaflets with its left hand.
Although the robot was developed to be autonomous, it used a semi-autonomous approach to speech recognition because automatic speech recognition (ASR) struggled in the shopping mall. In this system, an operator simply typed in the words spoken by the user without supplying any further knowledge.

3.2 Situation

The robot wandered around the area, approached visitors, offered them a leaflet (Figure 1, left) that included a map of the area, and talked to them. Visitors freely interacted with the robot, which greeted them and initiated its direction-giving service: “May I help you? I can give directions or recommend shops. Where do you want to go?” When visitors asked about specific locations, it provided detailed explanations: “Go straight along this corridor and down the escalator.” It also pointed at destinations or in specific directions (Figure 1, right). The robot's area of operation was recorded with several ceiling cameras and a microphone attached to the robot. During 13 days of recording, the robot operated for 72.25 hours and 8,493 visitors appeared around it.
Fig. 1. Distributing (left) and direction-giving (right).
The robot occasionally accidentally dropped its leaflet during its leaflet-distributing service (Figure 2). It was supposed to grab the leaflets when they emerged from its printer. However, since its arm manipulation lacked precision, it sometimes dropped them. From the recorded data, we identified 47 cases where the robot dropped the leaflets in contexts where visitors could have provided assistance.
Fig. 2. Robot dropped a leaflet.
This study was approved by the ethical review board of the researchers’ affiliations. All the interactions were in Japanese, and we translated the examples/cases into English.

4 Initial Analysis and Classification of Cases

First, we classified the 47 cases (identified in Section 3) in which the robot dropped leaflets and developed an overview of what kinds of interactions occurred during them.

4.1 Coding

We repeatedly observed and coded the video data of every case. The factors that emerged included the presence/absence of interaction between the helper and the robot prior to the helping behavior and the presence/absence of a helper. In addition, we categorized the interactions prior to the helping behavior by whether the interaction, if any, was abusive. If a helper was present, such situations were further categorized by the type of pre-interaction.
Two independent coders coded each case for the following factors:
Interaction type prior to helping behavior: First, the coders determined whether each case qualified as an interaction. A case was classified as an interaction if any visitor took any action toward the robot (approaching, talking, abusing, etc.); otherwise it was coded as no-interaction, even if the robot addressed a passerby who ignored it. Next, among the interaction cases, the presence or absence of abuse was determined based on the following definition [29]:
“Persistent offensive action, either verbal or non-verbal, or physical violence that violates the robot's role or its human-like nature.”
Thus, each case was classified based on the following three categories: an abusive interaction (at least one moment when the robot was mistreated), a non-abusive interaction, or no-interaction.
Helper: If the robot was helped by someone, the visitors who picked up the dropped leaflets were sorted into the following categories: abusers (those who mistreated the robot), users (those who conventionally interacted with it), and observers (people who did not interact with the robot, including those who briefly watched the interactions or just looked at it, i.e., passersby). If no one helped the robot, we labeled the case as nobody.
The inter-rater agreement (kappa coefficient) for the type of interaction was .97. For the type of helper, agreement was .86 in abusive-interaction situations, .94 in non-abusive-interaction situations, and 1.00 in no-interaction situations.
Note that in our coding, every person who picked up leaflets tried to return them to the robot; if they acted out of their own self-interest, e.g., to acquire a leaflet, we did not categorize them as helpers. However, we found no such cases. Nor did we observe any other helping behaviors, e.g., efforts to stop the abuse, except for a mother who warned her son, “you'd better stop before you break it,” which failed to deter him; she made no further effort. We did not count her as a helper.
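The agreement values above are kappa coefficients (Cohen's kappa), which correct raw agreement between two coders for the agreement expected by chance. As an illustrative sketch only (the labels below are hypothetical, not the study's actual codes), such a coefficient can be computed as:

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of cases on which the two coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes for five cases (for illustration only).
a = ["abusive", "non-abusive", "no-interaction", "non-abusive", "abusive"]
b = ["abusive", "non-abusive", "no-interaction", "abusive", "abusive"]
print(round(cohen_kappa(a, b), 2))  # 0.69
```

With the study's real codes, the same function would simply be applied to the two coders' labels over all 47 cases.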

4.2 Findings

Table 1 shows the number of cases in each category. Among the 47 analyzed cases, 29 involved interactions prior to the helping behavior that were either abusive (8) or non-abusive (21). In 23 of those 29 cases, the interacting visitors were either only children or groups that included children. The remaining 6 cases comprised exclusively adults or teenagers. Of the 8 cases with abusive interactions, 7 involved children and one a teenager.
Table 1. Number of Cases in Each Category

Prior interaction to the helping behavior | Help | No help | Total | Ratio of help
Abusive interaction | 7 | 1 | 8 | 88%
Non-abusive interaction | 10 | 11 | 21 | 48%
No-interaction | 2 | 16 | 18 | 11%
Total | 19 | 28 | 47 |
Concerning the 8 cases where the robot was abused, the abusers themselves helped the robot in 3, users (other than the abusers) helped in 4, and in no case did an observer help. In one case, nobody offered help. Of the 21 cases of non-abusive interaction, the robot was helped in 9 by users and in one by observers; nobody helped in 11. Of the 18 no-interaction cases, the robot was helped in 2 by observers, and nobody helped in 16. As shown in Table 1, the ratios with which the robot received help were 88%, 48%, and 11% with abusive, non-abusive, and no-interaction, respectively.
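The help ratios in Table 1 follow directly from the per-category counts. A minimal sketch of that arithmetic (counts taken from Table 1):

```python
# (helped, not helped) counts per prior-interaction category, from Table 1.
cases = {
    "abusive interaction": (7, 1),
    "non-abusive interaction": (10, 11),
    "no-interaction": (2, 16),
}
for category, (helped, not_helped) in cases.items():
    total = helped + not_helped
    print(f"{category}: {helped}/{total} = {helped / total:.0%}")
# abusive interaction: 7/8 = 88%
# non-abusive interaction: 10/21 = 48%
# no-interaction: 2/18 = 11%
```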

5 Qualitative Study of Helping Behavior

In Section 4, we described how some children who abused the robot also helped it. What explains such a situation? We deepened our understanding by further analyzing the helping behavior toward the robot with a qualitative method.

5.1 Method

5.1.1 Trajectory Equifinality Model (TEM).

We used a qualitative analysis method called the Trajectory Equifinality Model (TEM), which was developed for qualitative time-series data (e.g., interviews about life histories). The TEM concept was proposed by cultural psychologist Valsiner in the context of explaining the multilinearity of development and its potential applications to the human life course [38]. TEM is an analytical method that does not ignore time; it focuses instead on its flow as experienced by an individual. TEM is characterized by its emphasis on the concept of irreversible time and its explicit depiction of time. Sato [39] presented the transformation process of a system, viewed as a set of interacting elements moving toward a certain outcome, as a TEM diagram that represents time without discarding it. The actual paths taken by a system, as well as alternatives, can be shown by diagrams that represent diversity and paths on a time axis [40]. In this study, we analyzed not only the interaction between helpers and the robot but also the chain of interactions between helpers and the environment (others, events, etc.) to explore the factors that facilitate helping behavior on a time axis before a person chooses an action that assists the robot. Therefore, we followed a timeline from the moment a person encountered the robot to when he/she helped it.
TEM is based on the philosophy that various paths serve as the basis for each individual experience, although many reach a common result [10, 41, 42]. It models how people often arrive at similar results, called an Equifinality Point (EFP), even though they take different paths. Path choices are sometimes influenced by others; the events that exert such influence are called Social Guides (SGs). Branches are often found on a path to an EFP, and these points are called Branch Points (BPs). The TEM method qualitatively identifies these points and social-guide events from the data. The behavioral process of helping a robot is not a single path; it might follow multiple paths. This aspect allows us to identify the factors involved in the process that leads to a person deciding to help a robot.
Although TEM was originally developed as a method for analyzing life courses, it can also be used for relatively short interactions and for analyzing realistic chains of interactions. An interaction between a robot and a person is not a single exchange; it is a chain of multiple interactions. This method contributes to HRI because it enables an analysis of interactions that cannot be captured by numerical results alone, such as whether the frequency of an action increased due to one factor.

5.1.2 TEM Procedure.

We first set an EFP as follows.
An Equifinality Point (EFP) leads to an equivalently-finalized result. We set our EFP as a helping behavior, i.e., picking leaflets up and returning them to the robot.
We conducted our analysis based on the basic procedure of the TEM method:
Transcribing: The video data were coded and transcribed into the detailed actions (e.g., approaching the robot and stopping behind it on its right side; a person moved her face near the robot and talked to it; she skipped away) and utterances (e.g., “Hello!”, “The robot's talking!”, and “Yes, let's go!”) of every person in the videos, not just the person who helped the robot. We used software called ELAN, which annotates video and audio files with respect to their content. In addition to creating annotations, it allows users to specify a range for each annotation and replay annotated sections later.
Individual modeling: Codes with similar meanings were collected, categorized, and arranged along a time series. We identified such categories as paying attention to the robot, slowly approaching it, running to approach it, watching it, speaking to it, verbally abusing it, touching it, picking a leaflet up, returning a leaflet, and so on. After that, we analyzed the following TEM factors, which are the basic concepts that comprise TEM theory. We re-read the written data and reviewed the video data when needed.
Branch Point (BP): If another consequence is possible, a point emerges at which a person's choices might diverge. This is called a Branch Point (BP).
A Social Guide (SG) is a relevant event that encourages a choice at a BP that moves the person closer to an EFP. We identified SGs through a qualitative-study process. Researchers experienced in viewing similar interactions repeatedly observed the individual videos of every participant. We categorized the events (the behaviors of others) observed before each BP and extracted the common factors among these events. We judged whether the extracted factors were SGs, i.e., whether they were relevant to the choices made at the BP, partly by interpreting the events and partly by comparison with previous studies. Finally, we confirmed the SGs through triangulation and a consensus with the other researchers, who have different types of expertise. In our case, the first author (psychology expertise) identified the SGs based on the above procedure. We based our final confirmation on a consensus among all three authors, where the second and third (experts in informatics) observed and confirmed the video data.
Integrating category models from individuals: We integrated the models from individuals for each category derived from the initial analysis described in Section 4. We vertically compared and examined the TEM diagrams of each person and arranged similar experiences together along the time series to create category groups with a higher level of abstraction by combining common categories. At the same time, we repeatedly observed the videos of the participants in the same category. Note that for comparison purposes, we vertically aligned similar events, as well as different but mutually exclusive events, in each time series. Such mutually exclusive events are tagged as experiences that diverge at a BP that is set up later. After the events were lined up, common categories were combined to create a more abstract group of categories. The first author, who specializes in psychology and TEM, generated categories that were confirmed by the second and third authors. Core categories with a higher level of abstraction were generated from these category groups, including attention, abuse, noticing the robot's failure, and helping behaviors. For example, the experience of paying “attention” was deemed important, although before it, a period might have existed in which the person was unsure whether to pay attention to the robot or to ignore it even though it was in his field of view. This is a BP. For the SGs working at a BP, we categorized similar events from the individuals' models, as we did for the BPs. Finally, we created integrated category models by connecting events from a starting event to the EFP.
Integrating models from category models: We created an integrated TEM diagram using the same procedure as above.

5.1.3 Data Selection.

We included the 19 cases of helping behavior in the analysis: 13 cases of children (7 boys and 6 girls), 5 of adults (2 males and 3 females), and one of a female teenager. Based on the types of interaction prior to the helping behaviors and the types of helpers, we created the following five categories in which helping behaviors occurred:
abusive interaction, abusers helped (Section 5.2.1);
abusive interaction, users helped (Section 5.2.2);
non-abusive interactions, users helped (Section 5.2.3);
non-abusive interactions, observers helped (Section 5.2.4);
no-interaction, observers helped (Section 5.2.5).

5.2 Findings

Figure 3 shows the TEM diagram of those who helped the robot; it integrates the models of each category obtained by the procedure reported in Section 5.1.2.
Fig. 3. Trajectory Equifinality Model (TEM) diagram of people that helped the robot.
Fig. 4. An abuser helped the robot.

5.2.1 Abusive Interaction/Abuser Helped.

All of those who helped the robot in this category were children. Figure 3 illustrates this category in bold lines as a model of the processes of the children who abused (Figure 4(a)) and later helped the robot (Figure 4(b)) (the “abusive interaction/abuser helped” category). Three BPs are present in the process toward the EFP (helped robot): attention (BP1), abuse (BP2), and noticed robot's failure (BP3). Each BP and the EFP were preceded by an SG that facilitated a choice at that point.
BP1: Attention: At this first BP, we observed that the children focused on the robot, which fueled their interest in it. They then decided to approach it and start interacting with it.
We identified SG1 as the presence of others. In case example 1, the target child first watched another child who was actively interacting with the robot, verbally abusing it (“You're snot-nosed”) and tickling its face. When the robot offered to give directions, the target child asked: “Where are the cars?” In another case, the target child watched another child interacting with the robot and then approached it: “Oh, a robot!” Thus, in both cases, the target children apparently noticed and became interested in the robot because they saw other children interacting with it. Although they were interested in the robot, they initially hesitated and began to interact with it only after observing the interactions between the robot and the other children, perhaps realizing that it might be fun. Thus, the presence of others encouraged the target children's choice at BP1 to focus on the robot and raised their interest in it.
#Case example 1
Child 1: “You're snot-nosed.” He tickled the robot's face (SG1: presence of others).
Robot: “May I help you? I can guide you around the ATC mall or provide some recommendations.”
Child 1: He stepped back, turned, and looked at the robot.
Target child: “Where are the cars?” He peered more closely at the robot and repeated: “Where are the cars?” (BP1: attention).
Robot: “Where are you going?”
Target: “To where the cars are.”
Robot: “In the parking lot.”
Target: “The parking lot?” He stumbled on purpose, flapping his hands.
Other child: “Injection means...” (Since the nouns for parking and injection are homonyms in Japanese, the child was making a joke.)
Target: He mimicked giving an injection to another child and said, “bushuuu.”
BP2: Abuse: After briefly interacting with the robot, the target children engaged in abusive interactions with it. We identified SG2 as other children's abuse of the robot. First, we observed that another, more aggressive child became excited and verbally abused the robot; then a target child joined in, also verbally abusing it (case example 2). We believe that the abuse by others (SG2) encouraged him to start abusing the robot himself. That is, he was curious about what would happen if he said such words and was reassured because the robot did not respond violently to being mistreated by the other children. In another case, after a child's parent warned, “you'd better stop before you break it,” the children kept engaging in abusive behavior, and the target child also abused the robot. Perhaps both the fact that the robot did not retaliate and the reassurance that it would not break at this level enabled the abusive behavior. While other children relentlessly escalated their verbal abuse, the target children did not; they actually tried to interact with the robot, for example, by looking at its face and noticing that it was holding a leaflet, suggesting an interest in various actions.
#Case example 2
Robot: “I can guide you around the mall or recommend particular shops.”
Other child: He brought his face closer to the robot's face and said, “You stink!” (Figure 5(b)) (SG2: others’ abuse). He repeated: “You stink!”
Fig. 5.
Fig. 5. Scene of case examples 1-4: A child who abused the robot and then helped it. (Red arrows indicate the target child who abused the robot, the blue arrow indicates another child who abused it before the target child, the green oval indicates the dropped leaflet, and the red dotted line indicates the gaze direction of the target child.)
Target: He crouched down on the robot's right side, looking at its tires.
Robot: “Where do you want to go today?”
Target: “Where the robots are.”
Robot: “Sorry, I only know about this mall.”
Target: “That's stupid! You goofball!” He spread his arms out (Figure 5(c)). (BP2: abuse).
Other child: He stroked the robot's face and said, “Bye-bye dum-dum.” Then he skipped away and waved with his left hand.
BP3: Noticed robot's failure: SG3 here was the robot's failure: the robot raised its hand while holding the leaflet but dropped it, and the target child saw the mistake. In two cases, the robot dropped the leaflet when it raised its arm to offer it to a target child. All the target children expressed surprise when this happened (case example 3).
EFP: Helping the robot: Finally, we observed the target children pick up the robot's dropped leaflet (case example 3). This action was facilitated by the fact that the other children were farther away from the robot than the target children (SG4); the presence of others diffuses responsibility and inhibits helping behavior [44]. In another example, no other child was near the robot and the target child. In Figure 5(d), the other child who had been closely interacting with the robot had left, so no other child remained between the robot and the target child; another child stood directly behind the target child. The target child probably thought that he was the only one who could help the robot because no other children were available.
#Case example 3
Target: He approached the robot, looked into its face, said “bye-bye,” and knocked the leaflet out of the robot's left hand.
Robot: It dropped the leaflet (SG3: robot failure) and said, “May I help you?”
Target: “Wow!” (Figure 5(d)). (BP3: noticed the robot's failure) (SG4: absence of other children).
He bent down, picked up the leaflet, and returned it to the robot (Figure 5(e)). (EFP: helped robot).

5.2.2 Abusive Interaction/Users Helped.

Although this is a category in which abusive interaction occurred, the targets did not engage in it. All of those who helped the robot in this category were children. We identified a similar process in which BP1/SG1, BP3/SG3, and SG4 were common with the cases in Section 5.2.1 (abusive interaction/abuser helped).
The main difference is that the target children were not abusive at BP2, although they did watch abuse by other children. The target children seemed interested in the robot and observed it being abused from close proximity. For instance, another child pushed the robot and lifted its arm (Figure 6(a)). For SG4, immediately before the EFP (the helping behavior), the target child was either the closest child to the robot or the only one present (Figure 6(b)).
Fig. 6.
Fig. 6. User who only watched abuse helped the robot.

5.2.3 Non-Abusive Interaction/User Helped.

In this category the interaction was not abusive, and the target who engaged in the interaction helped the robot. The targets of this category included both children and adults. BP1/SG1, BP3/SG3, and SG4 were common with the abusive-interaction/abuser-helped cases.
Abuse did not occur at BP2. Instead, the target seemed very interested in the robot and interacted with it. In one scene, the robot and the target had a conversation: when the target asked about the location of a particular store, the robot acknowledged the request, explained how to find the store, and even pointed in its direction (Figure 7(a)). For SG4, either only the target's siblings were present and no one from other groups was nearby (Figure 7(b)), or nobody was around the robot.
Fig. 7.
Fig. 7. User engaged in direction-giving service.

5.2.4 Non-Abusive Interaction/Observer Helped.

This is a category in which the interaction was not abusive, and the target who helped the robot only observed the interaction. This category's target was a child. The process started like the other categories until BP1/SG1: the target child showed interest in the robot and watched it. However, the process then diverged from the previous cases. The target child did not interact with the robot (BP2) and only watched it give a leaflet to others (Figure 8(a)). Then the target child noticed (BP3) that the robot had dropped a leaflet (SG3; Figure 8(b)). When a passerby walked away and no one was around the robot (SG4; Figure 8(c)), she approached the robot, crouched to pick up the leaflet, and returned it (EFP; Figure 8(d)). In this scene, she was interested in the robot, hesitated before approaching it, and observed the interaction between the robot and the others from a short distance. After observing its interaction with others, she approached it when no one else was around.
Fig. 8.
Fig. 8. Observer only watched others interacting with the robot and finally helped it.

5.2.5 No-Interaction/Observer Helped.

The targets of this category were a child and an adult. The targets who came to this area of the mall were curious about the robot, although they only observed it from a distance (BP1) (e.g., Figure 9, left). In this example, a girl continued observing the robot as it wandered around (Figure 9, left). She saw it try to give a leaflet to a visitor who ignored it (Figure 9, middle). Then she noticed that the robot dropped the leaflet (BP3). She ran up to the robot (Figure 9, right), picked up the leaflet, and returned it (EFP).
Fig. 9.
Fig. 9. Observer who helped the robot.
Like the other categories, the targets showed interest in and concern for the robot at BP1. But unlike the other categories, they took no active action beyond observing it from a distance. Without SG1, i.e., the presence of others, they might have had even more difficulty taking such active actions as approaching or talking to the robot. However, since they continued to pay attention to the robot, when it dropped the leaflet (SG3), they noticed (BP3) and helped (EFP). As in the other categories, when they helped the robot, others were absent (SG4). In these cases, the robot's failure presented an opportunity for the targets to interact with it.

5.2.6 Integration of Each Model.

The TEM models for each of the above five categories were laid out side by side and analyzed based on the procedure described in Section 5.1.2. The results showed that BP1 (attention), BP3 (noticed robot's failure), and the EFP (helped robot) were common branches of every model. BP2 was common in terms of prior interaction, with qualitative differences in its content (abuse or no abuse). We combined them to create a TEM diagram of the process through which a person helped a robot (Figure 3).

6 Discussion

Since we were surprised to witness children who abused a robot and then helped it, we focused on the process by which a person helped a robot and the factors that facilitated this assisting process.

6.1 Abusers Who Helped Robots

We found that children and a teenager spontaneously helped the robot, even though they had previously abused it. That is, not only children and a teenager who engaged in friendly communication with the robot but also those who yelled at it performed helping behaviors, such as picking up leaflets it dropped. This phenomenon is counterintuitive. Nevertheless, the target children and the teenager who offered help had only engaged in relatively mild abuse. Yamada et al. [10] proposed a four-stage model of robot abuse: approach, mild abuse, physical abuse, and escalation. The target children and the teenager in our study only reached the second stage of this model, i.e., mild abuse; other children escalated to the physical-abuse stage. Future work must study whether children who escalate their abuse also engage in helping behavior.
Keijsers and Bartneck [32], in a study with adults, concluded that one cause of bullying is a low estimate of the robot's mentality. In our observations, the robot was sometimes slow to respond or failed to appropriately satisfy a user's intention, which might have fueled the abuse. On the other hand, Nomura et al.'s work [33] with children cited curiosity and interest as reasons: children who abused robots said they enjoyed the robots' reactions and, even though they abused them, did not intend any harm. In our study, too, children and a teenager abused the robot. Since ours is an observational study, we could not identify the reasons for their abuse and helping. However, from the video observations, we inferred that they were curious about the robot, teased it, and enjoyed its reactions, the same reason the children gave for their abuse in Nomura et al. [33]. Since we believe they continued to enjoy their interaction with the robot, their mild abuse and helping behaviors are not necessarily contradictory. Perhaps both the abuse and the helping behavior are rooted in mind attribution: the children and the teenager who were curious about the robot saw others abuse it and supposed that they could abuse it, too, perhaps slightly more than the others; then, when it dropped a leaflet and nobody else was around, they helped it.
The ratio of the robot being helped increased when the interaction was abusive (Table 1: abusive: 88%, non-abusive: 48%, no-interaction: 11%). Our TEM analysis highlighted the importance of the attention paid to the robot in every category of cases. In the five-step decision-making model of helping behavior [7], the process by which a person reaches a helping behavior is divided into the following five stages: (1) recognizing that something serious is happening; (2) recognizing a crisis; (3) recognizing a responsibility to help; (4) recognizing that you know how to help; and (5) choosing to help. In other words, helping behavior starts with paying attention to the one who needs help. Robot abuse might attract more attention to the robot than normal interaction and thus, ironically, increase the chance that a robot will be helped.

6.2 What Causes Spontaneous Help for Robots?

People frequently helped the robot when it dropped a leaflet. If a person had dropped a leaflet (e.g., a discount coupon), little help from strangers seems likely, although this depends on the situation; people would probably recognize that the person could easily pick it up herself. In other words, the lack of a crisis at stage 2 of the decision-making model of helping behavior [7] (described above) discourages helping. In contrast, since the visitors in our study probably felt that a robot might struggle to retrieve a dropped leaflet, they recognized that the situation was critical and provided help.
As discussed in Section 6.1, attracting attention is one critical factor in the occurrence of robot-helping behavior. Note also that help was often offered by children. Our results match previous literature in which children provided helping behaviors rather indiscriminately to diverse people (e.g., [43]) as well as to robots [27] when they noticed such needs. In our study, children spontaneously helped the robot when they noticed its failure.
Moreover, the absence of others around the robot encouraged helping behaviors. As noted above, the presence of others diffuses responsibility and inhibits helping behavior [44]. Since the need for help was apparent from the situation in which the robot dropped its leaflet, we infer that the children who helped recognized, at stage 3 of the decision-making model [7], that assistance was their responsibility because no one else was present. This is consistent with the concentration of responsibility identified by Staub as one determinant of helping [45].
Finally, a relationship built through previous interaction with the robot might also contribute, as reported in previous studies [20, 46]. Children become familiar with robots through prior interaction with them. Perhaps children would be more motivated to assist robots if they had interacted more with them, regardless of whether the interaction was abusive.

6.3 Why Wasn't the Robot Helped when it was Being Abused?

Although we observed helping behaviors fueled by the robot's failures, i.e., dropping leaflets, we did not observe any helping behaviors toward another apparent problem: robot abuse. In terms of the decision-making model, a person must recognize how she can help at the fourth stage [7]. Since helping behavior in response to robot abuse is more complex than helping a robot that just dropped a leaflet, perhaps the children did not know how to respond to abuse. In fact, helping behavior is less likely to occur in human bullying [47]. Stepping in against abuse and abusers is very risky, since abusers might protest or “fight back,” fueling escalation of the conflict. The potential costs incurred by helping behaviors (e.g., physical danger, money, time, social aspects, etc.) are inhibitory factors (e.g., [45]). In addition, people judge whether the person who seeks help is worthy of such efforts (e.g., [45]). Perhaps the robot was not deemed sufficiently worthy of assistance even during its conflicts with others. In other words, one result of low mental attribution to robots is that they receive less help.
Another work that focused on group dynamics concluded that participants are more likely to intervene during robot abuse when a bystander robot expresses sadness [35]. Our study lacked such group dynamics: no third party signaled that the abused robot needed help. If another robot had been present and expressed sadness over the abuse, or if the abused robot had actively sought help from a nearby third party, some people might have intervened.

6.4 Implications

Our study's results show that a robot is more likely to be helped if any type of interaction has occurred with it. In other words, designing a robot that attracts interaction with others may lead to a robot that elicits spontaneous helping behavior from people. Although robots that require human help increase the burden on humans, robots that can leverage human assistance have the potential to accomplish much more than those that cannot [1]. Such a scenario might eventually increase human convenience.
Our study raises the following question: is robot abuse simply immoral behavior? We found that abusive behavior, at least mild abuse by children, is a manifestation of curiosity and does not necessarily signal hostility toward robots. The act of abuse itself is not ethical, and abuse should not occur. About half of the children who approach robots in wild settings engage in mild abuse [10]. As more robots are introduced into human societies, more will be placed in wild settings without human supervision. What this study shows is that we must redirect the spark of interest in robots from abuse to assistance. We envision a new design direction for robots that can lead from mild abuse to helping, in addition to directions that reduce robot abuse.
Nevertheless, finding ways to suppress robot abuse remains critical. From the perspective of bystanders, witnessing abuse is uncomfortable (e.g., [48]). Abuse by one person invites similar treatment from others [10]. If a robot cannot cope with abusive behaviors, it might fall into a chain of abuse from many people. Robots must be designed to “defend” themselves more effectively against mild abusive behaviors. Children who slightly abuse a robot are probably just curious about it, not bullies who dislike robots. Helpful techniques are available, e.g., the bystander-robot effect [35]. In future societies, where robots will inevitably play even more important roles, they will be introduced into our lives, and group interactions between people and robots will occur. The effectiveness of such methods based on group dynamics must be tested to determine whether they can indeed reduce robot abuse. We also hope that social cognition toward robots will eventually become more enlightened.

6.5 Limitations

Since all the data in this study were observational, we could not examine the motivations of the helpers. Future studies should interview those identified as helpers to clarify the causes of their abuse and/or their motivations for helping. Since our dataset was very small, our results must be confirmed quantitatively with a larger dataset. In addition, the helping behavior modeled in our study, picking up leaflets dropped by a robot, is obviously only one of many potential helping behaviors that can occur in HRI. Different tasks and environments may generate different helping behaviors or perhaps none at all. In fact, in another study, we observed situations in which no helping behavior occurred in response to abuse. In the future, we must verify whether other forms of helping behavior also occur for robots.
Cultural differences may also influence helping behaviors toward robots, as reported in research on HRI, prosocial behavior, and altruism. For example, Korte and Ayvalioglu [49] found that people from rural areas are more prosocial than urban dwellers. Levine et al. [50] conducted a worldwide field experiment on helping behavior in 23 large cities and found a more than two-fold difference in its occurrence among them. Differences in the motivation for altruism have also been studied: Eisenberg and Mussen [51] cite prosocial moral reasoning as a cognitive aspect of altruism, identifying five developmental stages associated with age as well as cultural differences in this development. Our study was conducted in a city shopping center in Japan and reflects urban and/or Japanese culture. We must verify whether similar behavior occurs in other cultures and/or rural areas. Further research on spontaneous human helping behavior toward robots is desirable.

7 Conclusion

This study clarified the process of human-helping behavior toward robots and the factors that facilitate it. Our qualitative analysis revealed that robot interaction facilitated subsequent helping behavior. We described how our participants started to pay attention to the robot, interacted with it (some of which was even abusive), noticed its failures, and helped it. Our findings suggest that curtailing robot abuse will not be simple. The children who abuse robots might become powerful supporters of them because we believe that robot abuse by children is rooted in strong curiosity. One interesting future direction will be designing a robot that redirects the curiosity of potential abusers to solicit their help.

References

[1]
Vasant Srinivasan and Leila Takayama. 2016. Help me please: Robot politeness strategies for soliciting help from humans. In Proceedings of the 2016 Conference on Human Factors in Computing Systems (CHI 2016), New York, NY: ACM, 4945–4955.
[2]
T. N. Beran, A. Ramirez-Serrano, R. Kuzyk, S. Nugent, and M. Fior. 2011. Would children help a robot in need? International Journal of Social Robotics, 3 (2011), 83–93.
[3]
J. J. Gibson. 1979. The Ecological Approach to Visual Perception. Houghton-Mifflin.
[4]
Markus Bajones, Astrid Weiss, and Markus Vincze. 2017. Investigating the influence of culture on helping behavior towards service robots. In HRI ’17: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: Association for Computing Machinery, (Mar. 2017), 75–76.
[5]
H. Huttenrauch and K. S. Eklundh. 2006. To help or not to help a service robot: Bystander intervention as a resource in human-robot collaboration. Interact. Stud. 7 (2006), 455–477.
[6]
Astrid Weiss, Judith Igelsböck, Manfred Tscheligi, Andrea Bauer, Kolja Kühnlenz, Dirk Wollherr, and Martin Buss. 2020. Robots asking for directions: The willingness of passers-by to support robots. In ACM/IEEE Int. Conf. on Human-Robot Interaction, 23–30.
[7]
B. Latane and J. M. Darley. 1970. The Unresponsive Bystander: Why Doesn't He Help? New York: Appleton-Century-Crofts.
[8]
Hans Werner Bierhoff. 2002. Prosocial Behaviour. Psychology Press.
[9]
Mark H. Davis. 2018. Empathy: A Social Psychological Approach. Routledge.
[10]
Sachie Yamada, Takayuki Kanda, and Kanako Tomita. 2020. An escalating model of children's robot abuse. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2020), 191–199.
[11]
J. F. Dovidio, J. A. Piliavin, D. A. Schroeder, and L. A. Penner. 2017. The Social Psychology of Prosocial Behavior. Psychology Press.
[12]
Nancy Eisenberg, Natalie D. Eggum-Wilkens, and Tracy L. Spinrad. 2015. The development of prosocial behavior. In Oxford Library of Psychology. The Oxford Handbook of Prosocial Behavior, D. A. Schroeder and W. G. Graziano (Eds.). Oxford University Press, 114–136.
[13]
Maayan Davidov, Carolyn Zahn-Waxler, Ronit Roth-Hanania, and Ariel Knafo. 2013. Concern for others in the first year of life: Theory, evidence, and avenues for research. Child Development Perspectives 7, 2 (2013), 126–131.
[14]
Michael Chapman, Carolyn Zahn-Waxler, Geri Cooperman, and Ronald Iannotti. 1987. Empathy and responsibility in the motivation of children's helping. Developmental Psychology 23, 1 (1987), 140–145.
[15]
Angela C. Davis-Unger and S. M. Carlson. 2008. Development of teaching skills and relations to theory of mind in preschoolers. Journal of Cognition and Development 9, 1 (2008), 26–45.
[16]
C. D. Batson, D. A. Lishner, and E. L. Stocks. 2015. The empathy—altruism hypothesis. In The Oxford Handbook of Prosocial Behavior. D. A. Schroeder & W. G. Graziano (Eds.), Oxford University Press, 259–281.
[17]
W. G. Graziano, M. M. Habashi, B. E. Sheese, and R. M. Tobin. 2007. Agreeableness, empathy, and helping: A person × situation perspective. Journal of Personality and Social Psychology 93, 4 (2007), 583–599.
[18]
H. W. Bierhoff, R. Klein, and P. Kramp. 1991. Evidence for the altruistic personality from data on accident research. Journal of Personality 59 (1991), 263–280.
[19]
Yoshinobu Yamamoto, Mitsuru Sato, Kazuo Hiraki, Nobuaki Yamasaki, and Yuichiro Anzai. 1992. A request of the robot: An experiment with the human-robot interactive system Huris. In Proc. of IEEE International Workshop on Robot and Human Communication, 204–209.
[20]
Tetsuo Ono and Michita Imai. 2000. Reading a robot's mind: A model of utterance understanding based on the theory of mind mechanism. Proceedings of Seventeenth National Conference on Artificial Intelligence (AAAI-2000), 142–148.
[21]
Helge Hüttenrauch and Kerstin Severinson Eklundh. 2003. To help or not to help a service robot. In The 12th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN). IEEE, Millbrae, CA, USA, 379–384.
[22]
Christoph Bartneck, Michel van der Hoek, Omar Mubin, and Abdullah al Mahmud. 2007. “Daisy, Daisy, give me your answer do!” - Switching off a robot. Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, Washington DC, 217–222.
[23]
Mark C. Somanader, Megan Saylor, and Daniel Levin. 2011. Remote control and children's understanding of robots. J. Exp. Child Psychol. 109, 239–247.
[24]
Wilma A. Bainbridge, Justin Hart, Elizabeth S. Kim, and Brian Scassellati. 2008. The effect of presence on human-robot interaction. In Paper Presented at the IEEE Int. Symposium on Robot and Human Interactive Communication (ROMAN'08).
[25]
Kerstin Fischer, Bianca Soto, Caroline Pantofaru, and Leila Takayama. 2014. Initiating interactions in order to get help: Effects of social framing on people's responses to robots' requests for assistance. In Proc. RO-MAN 2014. IEEE, 999–1005.
[26]
Abrar Fallatah, Bohkyung Chun, Sogol Balali, and Heather Knight. 2020. "Would you please buy me a coffee?" How microcultures impact people's helpful actions toward robots. In Proceedings of the 2020 ACM Designing Interactive Systems Conference, 939–950. https://dl.acm.org/doi/pdf/10.1145/3357236.3395446
[27]
Dorothea U. Martin, Conrad Perry, Madeline I. MacIntyre, Luisa Varcoe, Sonja Pedell, and Jordy Kaufman. 2020. Investigating the nature of children's altruism using a social humanoid robot. Comput. Hum. Behav. 104.
[28]
Pericle Salvini, Gaetano Ciaravella, Gabriele Ferri, Alessandro Manzi, Barbara Mazzolai, Cecilia Laschi, and Paolo Dario. 2010. How safe are service robots in urban environments? Bullying a robot. IEEE Int. Symposium on Robot and Human Interactive Communication (RO-MAN2010), 1–7.
[29]
Drazen Brscic, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda. 2015. Escaping from children's abuse of social robots. In Proceedings of ACM/IEEE International Conference on Human-Robot Interaction (HRI2015), 59–66.
[30]
Noreen Herzfeld. 2015. Mourning HitchBOT. Theology and Science 13, 4 (2015), 377–378.
[31]
David H. Smith and Frauke Zeller. 2017. The death and lives of hitchBOT: The design and implementation of a hitchhiking robot. Leonardo 50, 1 (2017), 77–78.
[32]
Merel Keijsers and Christoph Bartneck. 2018. Mindless robots get bullied. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 205–214. https://dl.acm.org/doi/pdf/10.1145/3171221.3171266
[33]
Tatsuya Nomura, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada. 2016. Why do children abuse robots? Interaction Studies 17 (2016), 347–369.
[34]
H. Ku, J. Choi, S. Lee, S. Jang, and W. Do. 2018. Designing Shelly, a robot capable of assessing and restraining children's robot abusing behaviors. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 161–162.
[35]
Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, and Marynel Vázquez. 2020. Prompting prosocial human interventions in response to robot mistreatment. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 211–220. https://dl.acm.org/doi/pdf/10.1145/3319502.3374781
[36]
H. Lucas, J. Poston, N. Yocum, Z. Carlson, and D. Feil-Seifer. 2016. Too big to be mistreated? Examining the role of robot size on perceptions of mistreatment. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). 1071–1076.
[37]
Satoru Satake, Kotaro Hayashi, Keita Nakatani, and Takayuki Kanda. 2015. Field trial of information-providing robot in a shopping mall. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), 1832–1839.
[38]
J. Valsiner. 2001. Comparative Study of Human Cultural Development. Madrid: Fundacion Infancia y Aprendizaje.
[39]
T. Sato (Ed.). 2009. Starting Qualitative Study using the TEM as a new Method. Tokyo, Japan: Seishin-Shobo. (in Japanese)
[40]
T. Sato, T. Hidaka, and M. Fukuda. 2009. Depicting the dynamics of living the life: The Trajectory Equifinality Model. Dynamic Process Methodology in the Social and Developmental Sciences, 217–240.
[41]
Tatsuya Sato, Yuko Yasuda, Ayae Kido, Saori Takada, and Jaan Valsiner. 2006. The discovery of Trajectory Equifinality Model. Japanese Journal of Qualitative Psychology 5 (2006), 255–275.
[42]
Tatsuya Sato, Yuko Yasuda, Ayae Kido, Ayumu Arakawa, Hajime Mizoguchi, and Jaan Valsiner. 2007. Sampling reconsidered: Idiographic science and the analyses of personal life trajectories. In Cambridge Handbook of Socio-Cultural Psychology, Valsiner, J. and Rosa, A. (Eds.) Chapter 4, Cambridge University Press, 82–106.
[43]
Robert Hepach, Katharina Haberl, Stephane Lambert, and Michael Tomasello. 2016. Toddlers help anonymously. Infancy 22, 130-145.
[44]
John M. Darley and Bibb Latane. 1968. Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology 8 (4, Pt.1), 377–383.
[45]
Ervin Staub. 1979. Positive Social Behavior and Morality: Socialization and Development. Vol. 2. New York: Academic Press
[46]
Meredith Allen, Conrad Perry, and Jordy Kaufman. 2018. Toddlers prefer to help familiar people. J. Exp. Child Psychol. 174, 90–102.
[47]
Karin S. Frey, Mirian K. Hirschstein, Leihua V. Edstrom, and Jennie L. Snell. 2009. Observed reductions in school bullying, nonbullying aggression, and destructive bystander behavior: A longitudinal evaluation. Journal of Educational Psychology 101, 2 (2009), 466–481.
[48]
Phoebe Parke. 2015. Is it cruel to kick a robot dog? CNN Business.
[49]
Charles Korte and Namik Ayvalioglu. 1981. Helpfulness in Turkey: Cities, towns, and urban villages. Journal of Cross-Cultural Psychology 12, 2 (1981), 123–141.
[50]
Robert V. Levine, Ara Norenzayan, and Karen Philbrick. 2001. Cross-cultural differences in helping strangers. Journal of Cross-Cultural Psychology 32, 5 (2001), 543–560.
[51]
Nancy Eisenberg and Paul H. Mussen. 1989. The Roots of Prosocial Behavior in Children. Cambridge University Press.

Published In

ACM Transactions on Human-Robot Interaction, Volume 13, Issue 2 (June 2024), 434 pages. EISSN: 2573-9522. DOI: 10.1145/3613668.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 14 June 2024
Online AM: 10 April 2024
Accepted: 09 November 2023
Revised: 30 September 2023
Received: 26 March 2022
Published in THRI Volume 13, Issue 2

Author Tags

1. Qualitative research
2. robot-helping behavior
3. field trial

Qualifiers

• Research-article

Funding Sources

• JST Moonshot R&D
• JST, CREST