1 Introduction
Social robots have been introduced to public or semi-public spaces to work on behalf of humans, offering such potential benefits as providing human-like services, enhancing specific atmospheres (e.g., improving the attractiveness and enjoyability of stores and shopping malls), and serving as inexpensive labor [31, 33, 40]. Robots are expected to support human workers by undertaking labor-intensive, repetitive, stressful, and dangerous tasks. Researchers have extensively explored the potential services that social robots can offer in public spaces [2, 21, 29, 42]. One such service is regulating visitor behaviors. However, few studies have investigated the potential of robots to deliver such services in public settings [13, 27, 38].
Regulating the behavior of visitors in public places is crucial for maintaining smooth operations and ensuring a civil and safe atmosphere. In such places, security guards, police officers, shopworkers, and receptionists play a role in regulating people when necessary. They ensure that visitors adhere to specific rules, such as refraining from using their phones while walking in crowded areas, avoiding smoking where prohibited, and not bringing prohibited items into stadiums. When violations occur, these employees reprimand the involved individuals. Sometimes, the mere presence of staff discourages inappropriate behavior. However, attempting to regulate the actions of strangers is often stressful and might even put human employees at risk. If robots can assist in regulating visitors on behalf of human employees, those employees’ workload can be significantly eased, improving their overall work experience.
Applying a robot to regulate people’s behavior in public spaces is challenging. Many issues raise doubts about the effectiveness of such robots: their low social power, people’s lack of respect toward them [49], and the tendency to disregard their admonishments [27, 38]. Some individuals perceive robots that exhibit controlling behaviors, such as admonishing and punishing, as less likable and potentially unsafe [22]. This negative perception might fuel a public backlash against utilizing robots to regulate individuals in public environments [30]. Consequently, when designing a robot that is intended to regulate people in the real world, researchers must carefully consider both the effectiveness and the social acceptability of their designs.
In this study, our objective was to specifically design a robot for regulating people in public spaces. We chose to focus on managing queues because they present a novel potential application for robot services. Long queues frequently form at public venues and events such as concert halls, stadiums, movie theaters, and airports. The staff at these locations must ensure that individuals are properly lined up and remind them to refrain from inappropriate behaviors that may disrupt the queue or disturb others, such as queue jumping or obstructing forward movement. In Japan, security guards often handle queue-management responsibilities, including guiding visitors to the end of the queue, making announcements, and monitoring and addressing inappropriate conduct. Unfortunately, for human workers, managing queues can be a monotonous and tiresome task.
Our aim is to develop a robot capable of effectively managing queues while simultaneously gaining societal acceptance, so that people follow its guidance with minimal resistance. To achieve this goal, we seek to identify a design that satisfies these criteria. Hence, our first research question:
RQ1: How can we develop an acceptable and effective robot for regulating people in public spaces?
Our approach involved learning how to design a robot based on the effective and accepted role of a human security guard in society. Security guards possess a high level of social power and legitimacy and garner much greater compliance from the general public than ordinary citizens do [5]. Since a crucial aspect of their role is embodied in their professional image, we imbued our robot with the appearance of a professional security guard. We anticipate that such a design will enhance the robot’s social power, facilitate people’s understanding of its role, and improve acceptance of and compliance with its requests.
To create a professional impression for our robot, we incorporated the following three key features associated with a security guard’s image: duties, professional behavior, and professional appearance. We conducted interviews with three guards with experience in queue-management services to gain insights into their duties. Based on our findings, we implemented ushering, admonishing, question-answering, and announcement services in our robot and modeled its ushering behavior on that of a professional guard. We also designed a customized guard’s uniform for our robot to enhance its professional appearance.
Moreover, we faced uncertainty regarding the acceptance of a regulatory robot’s service (i.e., a queue-managing robot) in real-life situations. Unlike robots that provide such friendly and supportive services as guidance, entertainment, and assistance, there is a greater likelihood that people will reject a robot that is attempting to control their behaviors and admonish them for mistakes. The current limited knowledge about regulatory robots does not adequately capture how people perceive them in their everyday lives. It specifically remains unclear how individuals will react if a robot were to admonish them for inappropriate behavior in real-life scenarios and whether such interventions are deemed acceptable. In light of this, we recognized the need to comprehensively investigate how people perceive a robot that is seeking to regulate their behavior in real-world situations. Our aim was to gain deeper insights into the acceptance and reactions of individuals when confronted with a robot’s attempts to control their actions. Consequently, we formulated our second research question:
RQ2: How do people in public spaces perceive a robot that is attempting to control their behaviors?
We addressed our second research question by conducting a 10-day field trial at a children’s amusement event during which our robot autonomously managed a queue of people. During this trial, we conducted semi-structured interviews with both the event staff and the visitors who interacted with the robot and experienced its ushering and admonishing services. Our primary objective was to gain insights into people’s acceptance of the robot, understand their reasons for complying with/disobeying its admonishments, and compare their perceptions of a robot’s admonishment with that from a human. Additionally, we wanted to assess the extent to which our robot could autonomously provide queue-management services in a real-world setting.
The remaining sections of this article are organized as follows. Section 2 provides an overview of related works in the field. Section 3 discusses the design considerations that influenced the development of our queue-managing robot. Section 4 provides a detailed explanation of our robot system. A field trial and its results are presented in Sections 5 and 6. Section 7 includes a discussion of our findings, and Section 8 concludes the article.
7 Discussion
7.1 Revisiting Research Questions
The objective of this study was to develop a robot security guard that can manage queues in public spaces. Using a robot to control people’s behavior in the real world is challenging because of human reluctance to comply with a robot [27, 38] and the unfavorable perceptions some possess about such robots [22]. To overcome these challenges, we formulated our first research question: "How can we develop an acceptable and effective robot for regulating people in public spaces?" We proposed a design that mimicked a human security guard’s role, expecting it to help people easily understand the robot’s role and to improve their compliance with and acceptance of the robot. Our field-trial results showed that, by acting like a professional security guard, the robot persuaded individuals to comply with its admonishments and requests and gained their acceptance. Although the number of visitors who reported this is limited, visitors recognized the robot’s role as a security guard or a staff member from its uniform and appearance, which motivated them to comply with it. In addition, several visitors, merely from observing the robot, spontaneously commented to the experimenters that it was working as a staff member. However, it remains unclear whether the robot’s uniform itself caused those visitors to comply with its requests. Further, our interview analysis suggests that visitors cooperated with the robot because they perceived in it the features of a professional guard, such as sufficient capability for queue management and admonishments that resembled a human’s. Thus, imbuing a robot with a professional image is an effective and acceptable design for a regulatory robot.
Our second research question asked: "How do people in public spaces perceive a robot that attempts to control their behaviors?" Most visitors accepted this queue-managing robot’s attempts to manage their behavior in real life, and its requests and admonishments were convincing enough to follow. Their opinion of the robot was influenced by such factors as its capacity to provide a service, the clarity and reasonableness of its requests or admonishments, and its attitude. Furthermore, even though some visitors disobeyed it, our results demonstrate that being caught and admonished by a robot in public did not motivate them to confront it or act aggressively. Thus, despite the limited number of interviews with admonished visitors, our field-trial results suggest that people will welcome a regulatory robot service in society.
7.2 Implications
7.2.1 Implications for Design of Robots That Regulate People.
When robots assume authoritative positions in society, they will be required to control the behaviors of others as human professionals do. However, people typically dislike robots that merely attempt to control them through admonishments and punishments [22]. Such an attitude complicates integrating regulatory robots into society. Our study shows one successful solution to this problem.
Our approach is to give the robot a professional image, implement admonishing as one functionality among several others, and use admonishments sparingly, only in unavoidable situations. This design enabled our robot to regulate a crowd in a public space reasonably well while still receiving people’s acceptance, even when it performed its admonishing functionality. We believe this design led people to perceive the regulatory robot as providing a reasonable service rather than as merely an admonishing machine. We expect the "exhibiting a professional image" design concept to be applicable to other roles that require a professional image and include admonishing among their services, such as police officers, managers, teachers, and exam invigilators.
Our research also emphasizes the importance of minimizing admonishing situations and discovering non-confrontational alternatives. Since admonishing creates negative feelings in people and bad impressions of robots, it should be the last option considered by regulatory robots. Robots should first try to lower the likelihood of inappropriate behaviors by providing clear and understandable instructions. Our interview findings showed that most visitors are willing to cooperate if they clearly understand the robot’s instructions; conversely, confusing instructions increase non-compliance. Another precondition for cooperation is that people understand the robot’s role, so its appearance and behaviors must be designed to make that role obvious.
Finally, our results imply the potential of deploying robots for regulating people in public spaces. A robot’s unique capabilities, such as its wide sensing ability, its ability to talk to anyone without experiencing social anxiety, and its ability to work long hours, will be helpful in this role.
7.2.2 Implications for Design of Robot-Admonishing Functionality.
To begin with, our results suggest that a robot’s admonishment might be an effective and acceptable means of reducing inappropriate behaviors. This finding implies the potential of using a robot’s admonishments to lower inappropriate behaviors in society, especially less serious or unintentional ones. However, we believe a robot should minimize its use of admonishments, since they run a high risk of fomenting negative impressions.
Furthermore, since an admonishment from a robot is perceived as easier to accept and less offensive than one from a human, robot admonishments could minimize receivers’ negative feelings and such confrontations as arguments and violence that sometimes arise in admonishment scenarios. Therefore, robot admonishments seem especially fruitful for commercial settings like restaurants, shopping centers, and events that are concerned with customer impressions but still need to curb inappropriate behaviors.
Moreover, our interview results showed that the robot’s polite and courteous manner is one reason for compliance with and acceptance of its admonishments. People are concerned about the verbal attitude of an admonishing robot, just as in human–human communication. Therefore, a polite attitude (e.g., respectful language and a friendly tone of voice) could be a successful approach to an effective and acceptable admonishing service.
Lastly, our study indicates the importance of future work on developing robot admonishing/instruction-giving behaviors suitable for children. Our field study revealed that young children (of elementary school age) were less likely to voluntarily comply with the robot’s requests. As a result, parents seemed to act as mediators, explaining the robot’s requests to their children and making sure they followed them. This indicates that an admonishing strategy that works for adults could be less effective with children. It would be worthwhile to study children’s reasons for non-compliance: perhaps the robot’s speech is not understandable to young children, or the robot lacks an adult’s persuasiveness. Future studies should also consider improving a robot’s admonishing (or instruction-giving) strategies for children. One approach could be modeling the strategies of professionals who deal with children, such as teachers or childcare providers.
7.2.3 Admonished vs. Unadmonished Visitors’ Opinions about a Robot-Admonishing Service.
In our field trial, we had the rare opportunity to listen to the opinions of some visitors who were admonished by our robot. We roughly compared the opinions of the admonished and unadmonished visitors to gain insight into how being admonished by the robot affected their opinions and into the validity of the unadmonished visitors’ expectations in actual admonishing situations. Across all three interview topics about the admonishing service, i.e., obedience, comparisons of feelings, and impressions, the dominant opinions of both groups are similar, with slight differences in their reasons. Nevertheless, compared to the admonished visitors, more neutral opinions appeared among the unadmonished visitors’ results due to their lack of familiarity with the robot’s admonishing behavior. The following are the details of our comparison for each topic.
Concerning obedience, the majority of unadmonished visitors intended to comply with a robot’s admonishment, consistent with the fact that all the interviewed admonished visitors did obey the robot. Regarding reasons for obedience, the admonished visitors tended to talk more about their own actions, i.e., "admitting their own mistakes." Unadmonished visitors commented equally on their intention to admit a mistake and on such external influences as the robot’s capability and merits and the presence of others. Furthermore, based on their experiences, the admonished visitors said the robot politely warned them, an outcome the unadmonished visitors could not imagine due to their lack of exposure to the robot’s admonishing.
In comparing how it feels to receive a robot’s admonishment versus a human’s, unadmonished visitors expressed more positive expectations about a robot-admonishing service. A large majority stated that a robot’s admonishment would be easier to accept than a human’s. However, such a trend is not clear in the results from the admonished visitors. Instead, two dominant opinions emerged: "a robot’s admonishment is easier to accept than a human’s" and "human and robot admonishments feel the same." It is unclear whether this result reflects the small number (i.e., 6) of admonished visitors in our study or their experiences with a robot’s admonishment. Furthermore, unadmonished visitors expected a human’s admonishment to be more powerful than a robot’s, although none of the admonished visitors gave this opinion. Perhaps the unadmonished visitors imagined a robot’s admonishment for various inappropriate-behavior situations, including more serious ones, whereas the admonished visitors might have considered their own experiences and felt that such admonishments were powerful enough.
A majority of the unadmonished visitors reported a positive impression regarding visitors’ acceptance of the robot’s admonishing service, as did all the interviewed admonished visitors. Considering their reasons, most of the admonished visitors appeared to believe that a robot-admonishing service resembles a human-admonishing service and claimed no particular resistance toward it, whereas the majority of unadmonished visitors commented on the robot’s specific merits.
Based on the above comparison, similar to the majority of unadmonished visitors, the admonished visitors we interviewed had positive impressions of the robot-admonishing service, despite being admonished by a robot. Thus, a robot’s admonishment did not lead to any particular negative impressions from the visitors.
7.2.4 Ethical Implications of Using Security Robots in Public Space.
Our research suggests several ethical implications related to using security guard robots to regulate people’s behavior in public spaces. First, some people might doubt the ability of robots, which still lag far behind humans’ moral and cognitive capabilities, to judge human behaviors and issue admonishments. Such doubts could be heightened when people are asked to accept an admonishment from a robot, particularly if they are unaware that they are engaging in inappropriate behavior. Therefore, before deploying security robots in public spaces, it is necessary to establish trust in them and to assure people that they are under human supervision.
Furthermore, being approached by a security robot in a public space and being admonished by it could be more unpleasant for some people than being admonished by a human guard. People could be scared for various reasons, such as the unfamiliarity of such machines, the unpredictability of robots’ intentions, and their physical appearance. A person could also feel ashamed and guilty when a robot they regard as inferior to them reveals their mistakes. Similarly, robot admonishment in public spaces could be embarrassing for some people, as such incidents can attract extra attention from passersby due to their novelty. Therefore, it is important to be mindful of people’s feelings when using security robots to regulate them.
Moreover, the presence of a mobile security robot could frighten small children because of its appearance, movements, and so forth. Therefore, security robots that interact with the general public should be designed with a pleasant appearance. Also, even if a robot is designed to be completely safe, some parents may be concerned about their children’s safety around it. It is thus necessary to improve the perceived safety of such robots.
7.2.5 Challenges during Field Trial and Lessons Learned.
During our field trial at a public event, we encountered several challenges. Below, we discuss those challenges and our strategies for overcoming them.
(1) Developing the robot system to be robust to various visitor behaviors: One big challenge was developing our robot system to be robust to visitor behaviors. Visitor behaviors in the real world are sometimes complex and unpredictable, yet the robot should still work reasonably well under them. During our prototype testing with hired participants, we realized that while our robot worked well for ideal participant behaviors, it was less capable of handling unexpected ones (e.g., some visitors appear to be walking toward the queue to join it but actually intend to talk to the robot or are just passing through, and some visitors suddenly leave the queue). We first tried to develop an algorithm to recognize visitors’ intended paths from their trajectories, but this approach caused considerable misrecognition. As an alternative, we defined several parameters, performed a series of tests, and tuned them to ensure the robot could handle various visitor behaviors (a minimal sketch of this approach appears after this list). This testing and tuning process took considerable time and effort.
(2) Finding a public event for conducting a field trial: Finding a public event at which to conduct a field trial with the robot and getting approval from event management was challenging and time-consuming. Therefore, it is advisable to start searching for a field-trial location well in advance; the time required depends on the availability of events. Once a candidate event is found, it is necessary to negotiate with event management to obtain their approval. These negotiations cover the field-trial plan as well as the robot’s service (or a demonstration of it). Important details such as the safety procedures during the field trial, our requirements, and the data expected to be captured should all be included in the plan. Sometimes, when negotiating, event management may request changes to the robot and the field-trial plan. It is also important to confirm rights management in advance: especially if the robot is tested at commercial events, certain permissions must be obtained to include brand names in the robot’s utterances, costume, and so forth.
(3) Deciding the robot’s conversation length to avoid disturbing its service: Another challenge was determining an appropriate conversation length with visitors that would not interfere with the robot’s services. Engaging in lengthy conversations with visitors is not the primary goal of queue-managing robots (nor of some other robot services such as patrolling and food delivery), and such long interactions may interfere with the robot’s ability to perform its main duties. Some visitors do, however, wait around the robot and make prolonged attempts to engage with it. As a solution, we limited the length of the robot’s conversation. While this approach reduced the interaction length, it also disappointed some visitors. Therefore, future work should study how to determine a conversation length that neither interferes with the robot’s duties nor disappoints visitors. It is also worth studying alternative strategies for handling visitors who attempt long interactions that potentially interfere with the robot’s duties.
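To make the parameter-based approach in challenge (1) concrete, the sketch below classifies whether a tracked person is joining the queue by combining a few individually tunable conditions. This is only a minimal illustration under stated assumptions: the Person type, parameter names, and threshold values are hypothetical stand-ins, not our actual implementation.

```python
import math
from dataclasses import dataclass

# Hypothetical snapshot of a tracked person; the fields are illustrative
# stand-ins for the output of a people-tracking module, not our system's API.
@dataclass
class Person:
    x: float           # position in queue-area coordinates (m)
    y: float
    vx: float          # velocity estimate (m/s)
    vy: float
    dwell_time: float  # seconds spent near the queue entrance

# Tunable parameters of the kind we iteratively adjusted during testing
# (the values shown are placeholders, not our tuned settings).
ENTRANCE = (0.0, 0.0)  # queue-entrance location
JOIN_RADIUS = 1.5      # max distance from the entrance to count as "near" (m)
MIN_DWELL = 2.0        # seconds a person must linger before counting as joining
MAX_SPEED = 0.6        # faster movers are treated as passing through (m/s)

def is_joining_queue(p: Person) -> bool:
    """Combine simple, individually tunable conditions instead of
    classifying whole trajectories, which misrecognized passers-by
    and visitors who only wanted to talk to the robot."""
    dist = math.hypot(p.x - ENTRANCE[0], p.y - ENTRANCE[1])
    speed = math.hypot(p.vx, p.vy)
    return dist < JOIN_RADIUS and p.dwell_time > MIN_DWELL and speed < MAX_SPEED

# Example: a visitor standing nearly still by the entrance for 3 s is
# classified as joining; a fast walker at the same spot is not.
print(is_joining_queue(Person(0.5, 0.6, 0.1, 0.0, 3.0)))  # True
print(is_joining_queue(Person(0.5, 0.6, 1.2, 0.0, 3.0)))  # False
```

The appeal of this style over trajectory classification is that each threshold can be tested and tuned in isolation against observed failure cases, which roughly mirrors the iterative tuning process described above.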
7.3 Open Questions
7.3.1 Ethical Concerns of Regulatory Robots.
One important open question remains: the ethical concerns of applying regulatory robot services in our future societies. First, it is unclear whether allowing a robot or a machine to judge human behavior is socially acceptable. In our study, a human operator confirmed the robot’s detection of inappropriate behaviors and gave it permission to make an admonishment. While some might believe that robots have adequate capability for such tasks and can even perform them more fairly than humans, a portion of people resist allowing robots to judge human behaviors. Considering issues of responsibility, we personally believe such decision-making processes should be carried out by humans or under their supervision.
Second, it is unclear who should take responsibility for any adverse effects on people caused by the controlling behaviors of robots. Our findings show that some people were intimidated and embarrassed when admonished by a robot. If a robot mistakenly admonishes a visitor who did not engage in any inappropriate behavior, and any mental anguish or pain is caused, someone must take responsibility and compensate the injured person. The question of who bears responsibility (robot developers, employers, or another party) remains unresolved.
7.3.2 For Which Contexts Are Regulatory Robots Suitable?
Our findings reveal that the services of regulatory robots are suitable for children’s events attended by families with young school-aged children. It remains an open question in which other contexts robots can perform regulatory tasks on behalf of humans. A robot’s suitability for a particular context depends on many factors, including its effectiveness, its social acceptance, and its value in a specific location.
A robot’s effectiveness in a particular context depends on many aspects, including the task complexity and the nature of the people in that context. We believe a robot will be less effective in situations that demand fast responses or intricate communication skills, or where serious inappropriate behaviors are likely, owing to its technical immaturity and less powerful admonishment skills compared to humans. Furthermore, such visitor characteristics as the intention to cooperate and the ability to understand a robot’s requests will influence its effectiveness. In our case, according to visitor opinions, it is plausible that parents behaved well around their children as role models and therefore complied with the robot’s guidance. If only adults were present, incidents of ignoring the robot would likely increase [38]. Similarly, a robot will be less effective in environments without any adults, because children generally show less compliance [8] unless they are guided by adults.
It is also crucial to consider whether using a robot for regulatory services in a given situation is socially acceptable. Since social acceptance is a complex concept, it is preferable to conduct a detailed investigation of potential users’ opinions before deploying a robot in a specific context, and to apply it only if it is accepted.
In addition, a robot’s value in a particular context determines its suitability. For instance, robots will have higher value in locations where the majority are children, who show great interest in them. Moreover, the fact that children tend to view robots in a positive light suggests a promising use for robots in such contexts.
7.3.3 For Which Inappropriate Behaviors Is a Robot’s Admonishment Effective?
Our results showed that a simple admonishment from a robot is effective in situations featuring less serious or unintentional inappropriate behaviors. However, it remains unclear what other kinds of inappropriate behaviors robot admonishment can effectively reduce. Compliance with a robot’s admonishment depends on the nature of the behavior being prohibited. When people are admonished to act in a certain way, they may feel that their freedom to act as they desire is threatened and experience an unpleasant motivational state (i.e., psychological reactance). Such reactance can motivate them to restore their freedom through actions such as refusing to comply or behaving aggressively toward the source of the threat [44]. The importance of the threatened freedom [6] is one factor that determines the amount of reactance; in other words, how badly do they want to perform the prohibited action? In our situation, urging visitors to move ahead and line up properly was not seen as a restriction of an important freedom, so they might have felt less resistance to complying with the robot’s admonishment. If the importance of the prohibited behavior is high, for instance, when admonishing a visitor who is walking while using a smartphone, he or she will perhaps ignore the robot.
Furthermore, in our field trial, most of the people who engaged in inappropriate behaviors did so unintentionally. Therefore, a warning from the robot helped them realize their mistake and led to corrections. On the other hand, people who intentionally engage in inappropriate behavior tend to trivialize the robot [38]. We anticipate that a robot’s admonishment will not be forceful enough to stop those who intentionally engage in serious inappropriate behaviors, such as smuggling prohibited items into a stadium, situations that even human security guards struggle to resolve.
7.4 Future Role of Operator
We used a human operator to compensate for the technical limitations of our robot. We believe that with future technological advancements, most of the operator’s duties will become fully autonomous or significantly improved in performance. The effort required per robot will be greatly reduced, enabling a single operator to control multiple robots simultaneously. However, we do not expect operators to be eliminated completely, owing to ethical considerations. Currently, the operator performs four types of duties: updating queue area settings, resolving system errors, confirming admonishing targets, and speech recognition.
We expect that the selection of queue area settings will be automated with reasonable accuracy and that speech recognition tasks can be delegated to a robust ASR system. Choosing appropriate queue area settings based on crowd conditions is currently a primary task of the operator. Although we did not emphasize automating this task, because resolving such technical difficulties was outside the main thrust of our study, we believe such functionality can be implemented so that robots update the queue area by detecting the crowd’s condition. Furthermore, speech recognition was carried out entirely by the operator due to poor ASR accuracy in the highly noisy environments of public events; future ASR systems might perform much better in such conditions.
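As a rough illustration of how queue-area selection might be automated, the sketch below chooses a preset layout from an estimated crowd count, mimicking the judgment our operator currently makes manually. The preset names, capacities, and coordinates are hypothetical placeholders, not values from our system.

```python
# Hypothetical preset queue-area layouts (polygons in floor coordinates);
# the names, capacities, and coordinates are illustrative placeholders.
QUEUE_PRESETS = {
    "short":  {"capacity": 10, "area": [(0, 0), (2, 0), (2, 5), (0, 5)]},
    "medium": {"capacity": 25, "area": [(0, 0), (2, 0), (2, 12), (0, 12)]},
    "long":   {"capacity": 60, "area": [(0, 0), (4, 0), (4, 15), (0, 15)]},
}

def select_queue_area(estimated_crowd: int) -> str:
    """Pick the smallest preset whose capacity covers the crowd estimate."""
    for name in ("short", "medium", "long"):
        if estimated_crowd <= QUEUE_PRESETS[name]["capacity"]:
            return name
    return "long"  # fall back to the largest layout for very large crowds

# Example: a crowd-detection module estimates 18 people waiting nearby.
print(select_queue_area(18))  # -> "medium"
```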
Considering the practical limitations, we believe an operator will still be needed to confirm inappropriate-behavior detections and handle errors. First, the goal for admonishing-target detection should be zero errors, because a robot that admonishes a person without a legitimate reason might provoke a conflict; however, a detection algorithm that achieves 100% accuracy seems impossible in real environments. Even if we could achieve such accuracy, some people might not accept the idea of allowing AI to judge human morality, so a human operator might still need to make the final judgment [13]. Second, operator assistance may be required to fix system errors, since we cannot anticipate an error-free system. Even with our current, somewhat crude technology, the operator spent minimal time on error correction. We believe that as technologies develop, systems will become more robust, further reducing the number of operator interventions.
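The following sketch illustrates this human-in-the-loop arrangement, in which detections are queued for operator confirmation and only approved ones trigger an admonishment. The queue-based interface and function names are illustrative assumptions, not our system’s actual architecture.

```python
import queue

# Detections from the (imperfect) recognition module are queued for a human
# operator rather than triggering an admonishment directly.
pending: "queue.Queue[dict]" = queue.Queue()

def propose_admonishment(person_id: int, behavior: str) -> None:
    """Called by the detector; it never admonishes on its own."""
    pending.put({"person_id": person_id, "behavior": behavior})

def admonish(person_id: int) -> None:
    print(f"Robot politely admonishes person {person_id}.")

def operator_review(approve) -> None:
    """Drain pending detections; only operator-approved ones reach the robot.
    `approve` stands in for the operator's yes/no judgment."""
    while not pending.empty():
        detection = pending.get()
        if approve(detection):
            admonish(detection["person_id"])
        # Rejected detections (false positives) are simply dropped.

# Example: the detector flags two people; the operator confirms only the first.
propose_admonishment(7, "queue jumping")
propose_admonishment(9, "standing near the queue")  # a false positive
operator_review(lambda d: d["behavior"] == "queue jumping")
```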
7.5 Limitations
Our study has several limitations. First, we modeled the ushering behavior of only one professional guard. Perhaps other effective strategies could have been incorporated into our study.
Second, the visitors’ interview results were affected by self-selection bias [20], as the visitors themselves chose whether to participate in the interviews. Unfortunately, such situations are unavoidable in field studies. Self-selection bias may skew the interview results toward the positive, risking overly optimistic conclusions if not handled carefully. While in a self-selection situation people with strong positive or negative impressions of the robot are generally the ones expected to participate [20], our interview results show relatively few negative opinions. This could be because visitors with negative opinions did not participate in the interviews given the context of the amusement event (they may hesitate to discuss negative things in an enjoyable context), or because few visitors held strong negative impressions. Therefore, our interview results could underrepresent concerns about and negative opinions of such robots. Furthermore, the visitors who accepted the interviews could be those interested in new technology and the robot itself; indeed, some of them mentioned this. Such visitors may be more open to ideas like regulatory robot services than those without a favorable impression of robots in general, so there is a risk that they overlooked our robot’s demerits and instead reported its positive aspects. Moreover, we did not learn the opinions of most of the admonished visitors (including those who disobeyed), for reasons such as their declining our interview requests or the interviewers being preoccupied with other visitors. Had we gathered those opinions, our interview results might have differed or included more negative opinions.
Third, our interview results and observations might not accurately represent the general public’s response to the robot, because we only tested it at a children’s event attended by families. The robot may have received more favorable responses from families with children: parents tend to be more polite in front of their children, and children are highly unlikely to bully or ignore the robot with their parents standing next to them. The responses of other groups to our robot remain unknown. Furthermore, our interview results are limited to the staff members of one particular event, so their opinions cannot be generalized to staff who serve at different types of events.
Finally, our findings cannot be directly applied to countries with different cultures. People’s reactions to the robot and their opinions are strongly shaped by their cultural backgrounds, and their expectations about a security robot align with a human security guard’s role. We conducted this study in Japan, where human security guards are unarmed, play a friendlier role, and also engage in customer service. People are familiar with such services and cooperate with a guard’s requests; therefore, some people might comply with a robot in the same way they would with a human guard. In many other countries, however, security guards do not play such a friendly role: they are equipped with weapons and are limited to security-related tasks. In such places, applying our robot (unarmed and friendly-looking) to queue management or crowd handling might be less effective, because people might easily ignore its requests. Therefore, to apply a queue-managing robot in other countries, we must modify its design based on the cultural context and people’s expectations.