7.1 Main Findings
The results of the questionnaire filled out by the participants suggest that the children perceived the robot the same way as the adults in the online survey did, and that they understood that the robot was expressing anger when they persistently stood in front of it. Nevertheless, we found a significant effect on their obstructing behaviour only in the case of furious anger, which shows that not all anger expressions are equally effective at preventing hindering.
Many children mentioned curiosity as the reason for obstructing the robot. The eagerness to interact and actions such as lying down and pretending to sleep also suggest that some participants found the interaction to be enjoyable. This matches well with the results of the study of Nomura et al. [28], where children reported curiosity and enjoyment as the main reasons why they abused the robot.
Furious anger proved effective in influencing obstructive interactions, yet the definite reason why remains unknown. We suspect that the ‘scariness’ of the furious robot, which many of the children referred to in their interview responses, may have been one factor influencing the hindering behaviours. However, as we did not explicitly measure the perceived scariness of the robot, we cannot quantitatively confirm this conjecture.
Meanwhile, we did not find an effect of perceived intelligence on the children’s hindering. This seemingly contradicts the result of Bartneck and Hu [6], who reported a significant influence of the robot’s intelligence on robot abuse. The discrepancy could be due to the difference between the studied behaviours, since in [6] the participants were asked to destroy a robot rather than merely hinder its movement. But it could also be due to differences between children and adults, as their study was conducted with 19- to 25-year-old participants.
We found a significant effect of anger expressions on the number of interactions, but not on their duration. We expected these two measures to be correlated, and indeed the distributions of the scores (Figure 6) showed a similar trend; e.g., on average, the scores in the Furious condition were lower than in the Normal condition for both measures. One possible explanation for the lack of a significant effect on interaction duration is that some of the children seemed not to be deterred by the robot’s anger at all and engaged in very long interactions regardless of the condition; these appear as outliers in Figure 6(b). The outliers inflated the within-group variance, lowering the F statistic so that the differences did not reach significance. But these data also hint that a single solution such as expressing anger may not be effective on all children.
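To make this statistical point concrete, the following minimal simulation (with entirely invented durations, not our experimental data) illustrates how a few extreme values inflate the within-group variance and shrink the F statistic of a one-way ANOVA:

```python
# Hypothetical illustration only: invented interaction durations, not
# the data collected in this study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated durations (seconds) for two conditions, with a clear
# difference in means.
normal = rng.normal(loc=60.0, scale=15.0, size=20)
furious = rng.normal(loc=40.0, scale=15.0, size=20)
f_clean, p_clean = stats.f_oneway(normal, furious)

# Add two 'undeterred' children per condition who interact for a very
# long time regardless of the robot's behaviour.
normal_out = np.concatenate([normal, [310.0, 330.0]])
furious_out = np.concatenate([furious, [300.0, 320.0]])
f_out, p_out = stats.f_oneway(normal_out, furious_out)

print(f"without outliers: F = {f_clean:.1f}, p = {p_clean:.4f}")
print(f"with outliers:    F = {f_out:.1f}, p = {p_out:.4f}")
```

The mean difference between the conditions barely changes, but the within-group variance contributed by the outliers is enough to render the F test non-significant.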
In contrast to the children who eagerly interacted, a few were very reluctant to do so. Four of the children spent less than 10 seconds in total, across all four conditions, standing in front of the robot. This is somewhat similar to the rejection of interaction with robots by some children reported by Shiomi et al. [41], although they studied younger children. Arguably, such reluctant children would be unlikely to persistently bother the robot even in public, so they would not become the target of the robot abuse prevention mechanisms studied in this work.
7.2 Is It Acceptable for Robots to Show Anger?
While earlier works reported that robot abuse was difficult to stop no matter what the robot did [10, 51], our study has shown that expressing anger is effective both as a signal of mistreatment and as a deterrent against further escalation of abuse. Nonetheless, anger is generally a negative emotion that may provoke negative reactions in its recipient [32], so it is essential to consider how appropriate it is for robots to direct expressions of anger at people.
One argument for the use of anger is that it may be very important for preventing serious robot abuse. In particular, exhibiting or experiencing such negative behaviour towards robots could desensitise people to negative behaviours towards humans [15], which Axelsson et al. [2] refer to as ‘behaviour enforcement’. For example, if robots are employed in educational settings such as schools, where bullying tends to be a serious problem, letting children abuse a robot could have a detrimental effect and encourage bullying behaviour. In such cases, a limited expression of anger by the robot, used solely to stop the abuse, might be acceptable, provided that the parents and teachers at that school agree.
If the use of anger is adopted, it would be important to design it carefully. In particular, unnecessarily scaring children should be avoided whenever possible. In our experiment, the robot was relatively quick to switch from normal behaviour to mild anger, and later to strong anger; this was done on purpose, to keep the experiment from becoming excessively long. In an actual application, however, this change should happen slowly, such that the robot gets angry only at children who persistently ignore its polite requests. If other potentially successful strategies for stopping the obstruction are available, it might be better to try them first and resort to anger only if nothing else works.
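As a concrete illustration of such a design, the sketch below (our own illustration, not the behaviour controller used in the experiment; the level names and thresholds are hypothetical) shows a gradual escalation policy that exhausts polite strategies before any anger is shown and de-escalates as soon as the obstruction stops:

```python
# A minimal sketch of a slow escalation policy. The levels and the
# thresholds are hypothetical and would need tuning for a real robot.
from enum import Enum

class Level(Enum):
    POLITE_REQUEST = 1
    FIRM_REQUEST = 2
    MILD_ANGER = 3
    STRONG_ANGER = 4

# Number of ignored prompts tolerated before moving to the next level.
ESCALATION_STEPS = {
    Level.POLITE_REQUEST: 3,
    Level.FIRM_REQUEST: 3,
    Level.MILD_ANGER: 2,
}

class ObstructionPolicy:
    def __init__(self) -> None:
        self.level = Level.POLITE_REQUEST
        self.ignored = 0

    def on_prompt_ignored(self) -> Level:
        """Called each time the child ignores a request to move aside."""
        self.ignored += 1
        threshold = ESCALATION_STEPS.get(self.level)
        if threshold is not None and self.ignored >= threshold:
            self.level = Level(self.level.value + 1)  # escalate one step
            self.ignored = 0
        return self.level

    def on_path_cleared(self) -> None:
        """De-escalate immediately once the obstruction ends."""
        self.level = Level.POLITE_REQUEST
        self.ignored = 0
```

Under such a policy, anger is a last resort reached only after repeated ignored requests, and it is abandoned as soon as the child moves away.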
Conversely, it could also be argued that expressing anger or other negative emotions with robots and potentially scaring or hurting people should simply never be allowed, even if that is the best way to stop robot abuse. Instead, people should be educated not to abuse robots, in a similar way to how we teach children not to mistreat animals or harass people at work (we could even consider legally banning violence against robots [26]).
The stance that robot anger should never be permissible may appeal to many people, and it is interesting to ponder why. We could compare it with the case of animals: if we were to continually poke and prod a dog or some other animal, we would think it natural for it to become angry with us. Anger has evolved in living beings as a means of self-preservation [40], and we accept it as a matter of course. In the case of robots, the argument could be that, as artificial objects, they should not show anger. On the other hand, there is a multitude of examples of anthropomorphised robots to which developers have given artificial emotions and other biological characteristics. One might then wonder why anger should be any different.
The question could also be considered from the viewpoint of ethics and the debate on the moral standing of robots. First, there is the question of robots as ‘moral patients’, i.e., whether the fact that robots are being mistreated even merits our concern. While one possible position is that robots are just machines and can thus be treated like slaves [11], there are several ways to argue that robots do have moral standing, if not in themselves, then at least indirectly [13]. For example, in the case of robot abuse, one could argue that the abuse is wrong, and that the robot should have moral standing, because the display of abusive behaviour in public can cause distress to bystanders and negatively affect the abuser, or because interfering with the robot’s task can cause harm to society at large [4].
With a higher status of robots as moral patients, perhaps a robot that expresses anger would be naturally acceptable. Yet expressing anger also entails deliberately trying to affect and change human moral behaviour, so in effect robots become active ‘moral agents’. This in turn leads to the open questions of who should decide how and when a robot may exercise its moral actions, and who bears responsibility for the consequences of such actions [13]. For example, who should be held responsible if a robot’s angry expression excessively frightens a child?
Considering all of the above, we believe careful debate is still needed on whether robots should express anger at all and, if so, in which situations this may be adequate.
7.3 Limitations
The study was done in a laboratory environment, as this gave us better control over the experimental conditions than a real-world study would allow. In particular, we were able to study a single child interacting with the robot, whereas real-world interactions typically involve several children at once. However, an obvious limitation of in-lab experiments is that many of the experimental conditions are inevitably artificial. For example, children might not consider a robot patrolling a room to be doing an important task. While one participant said in the interview that the robot truly looked like it was patrolling, two participants said it did not (‘It said it was patrolling, but the room was small and it felt like it was playing around.’).
Repeating the interactions four times could also have felt awkward and caused the children to behave differently between conditions. However, we did not find any conspicuous behavioural changes across the repeated conditions, apart from the children requiring fewer explanations and starting to interact more quickly. We also believe that counterbalancing the order of conditions reduced the possible effects of ordering on the results.
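For reference, a common way to implement such counterbalancing for four conditions is a balanced Latin square, in which every condition appears in every serial position and precedes every other condition equally often. The sketch below illustrates only the general technique; the condition labels other than Normal and Furious are placeholders, and we do not claim this is the exact assignment scheme used in the experiment:

```python
# Illustrative balanced Latin square (Williams design) for an even
# number of conditions; labels other than 'Normal' and 'Furious' are
# placeholders.
def balanced_latin_square(conditions):
    n = len(conditions)  # assumes n is even
    # First row: 0, 1, n-1, 2, n-2, ...; each later row shifts it by 1.
    first, left, right, take_left = [0], 1, n - 1, True
    while len(first) < n:
        first.append(left if take_left else right)
        left, right = (left + 1, right) if take_left else (left, right - 1)
        take_left = not take_left
    return [[conditions[(x + r) % n] for x in first] for r in range(n)]

conditions = ["Normal", "AngerTypeA", "AngerTypeB", "Furious"]
for participant, order in enumerate(balanced_latin_square(conditions)):
    print(participant, order)
```

Each of the four resulting orders is then assigned to an equal number of participants, distributing any carry-over effects evenly across conditions.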
The presence of a confederate in the room may have felt unnatural and potentially affected the interaction. In connection with that, one could question whether the experiments truly reproduced robot abuse at all. In particular, one worry is that the children doing the abuse were simply imitating the behaviour of the confederate. (The experimental setup is to some extent reminiscent of the classic Bobo doll experiments by Albert Bandura, in which children were shown to copy aggressive actions towards a doll after first seeing adults perform them [3].) While this is an important criticism, it should be noted that a similar social learning process was also observed during real-world robot abuse, where children frequently started abusing the robot only after seeing some of their peers do it [51]. Moreover, analysis of the videos from our experiment showed that, once left alone, most children paid almost no attention to the confederate (who pretended to ignore them). This suggests that the children, on the whole, felt little pressure to obstruct the robot merely because they had been asked to. Nevertheless, it would be important to confirm the obtained results for naturally occurring robot abuse in a real environment, which we will consider exploring in future work.
The work was done in Japan, with Japanese children and one specific type of robot. It is possible that the outcome would differ in a different culture or with a different setup, and parts of the robot’s utterances and behaviours would likely need to be adjusted. Moreover, it is not clear whether the same three anger types would be identified in other cultural settings. The age range of the children in the experiment was also limited, and the results may not generalise to other age groups.