research-article
Open access

Field Trial of a Queue-Managing Security Guard Robot

Published: 28 October 2024

Abstract

We developed a security guard robot that is specifically designed to manage queues of people and conducted a field trial at an actual public event to assess its effectiveness. However, the acceptance of robot instructions or admonishments poses challenges in real-world applications. Our primary objective was to achieve an effective and socially acceptable queue-management solution. To accomplish this, we took inspiration from human security guards whose role has already been well received in society. Our robot, whose design embodied the image of a professional security guard, focused on three key aspects: duties, professional behavior, and appearance. To ensure its competence, we interviewed professional security guards to deepen our understanding of the responsibilities associated with queue management. Based on their insights, we incorporated features of ushering, admonishing, announcing, and question answering into the robot’s functionality. We also prioritized the modeling of professional ushering behavior. During a 10-day field trial at a children’s amusement event, we interviewed both the visitors who interacted with the robot and the event staff. The results revealed that visitors generally complied with its ushering and admonishments, indicating a positive reception. Both visitors and event staff expressed an overall favorable impression of the robot and its queue-management services. These findings suggest that our proposed security guard robot shows great promise as a solution for effective crowd handling in public spaces.

1 Introduction

Social robots have been introduced to public or semi-public spaces to work on behalf of humans by leveraging such potential benefits as providing human-like services, enhancing specific atmospheres (e.g., improving the attractiveness and enjoyability of stores and shopping malls), and serving as inexpensive labor [31, 33, 40]. Robots are expected to support human workers by undertaking labor-intensive, repetitive, stressful, and dangerous tasks. Researchers have extensively explored the potential services that social robots can offer in public spaces [2, 21, 29, 42]. One such service is regulating visitor behaviors. However, few studies have investigated the potential of robots to deliver such services in public settings [13, 27, 38].
Regulating the behavior of visitors in public places is crucial for maintaining smooth operations and ensuring a civil and safe atmosphere. In such places, security guards, police officers, shopworkers, and receptionists play a role in regulating people when necessary. They ensure that visitors adhere to specific rules, such as refraining from using their phones while walking in crowded areas, avoiding smoking where prohibited, and not bringing prohibited items into stadiums. When violations occur, these employees reprimand the involved individuals. Sometimes, the mere presence of staff will discourage inappropriate behavior. However, attempting to regulate the actions of strangers is often stressful and might even put human employees at risk. If robots can assist in regulating visitors on behalf of human employees, the workload of the latter cohort can be significantly eased, leading to an enhancement of their overall work experience.
Applying a robot to regulate people's behavior in public spaces is challenging. Many issues raise doubts about the effectiveness of such robots: their low social power, people's lack of respect toward them [49], and the tendency to disregard their admonishments [27, 38]. Some individuals perceive robots that exhibit controlling behaviors, such as admonishing and punishing, as less likable and potentially unsafe [22]. This negative perception might fuel a public backlash against utilizing robots to regulate individuals in public environments [30]. Consequently, when designing a robot that is intended to regulate people in the real world, researchers must carefully consider both the effectiveness and the social acceptability of their designs.
In this study, our objective was to specifically design a robot for regulating people in public spaces. We chose to focus on managing queues because they present a novel potential application for robot services. Long queues are frequently observed at such public events/settings as concert halls, stadiums, movie theaters, and airports. The staff at these locations must ensure that individuals are properly lined up and remind them to refrain from engaging in inappropriate behaviors that may disrupt the queue or disturb others, such as queue jumping or obstructing forward movement. In Japan, security guards often handle queue-management responsibilities, including guiding visitors to the end of the queue, making announcements, and monitoring/addressing inappropriate conduct. Unfortunately, for human workers, managing queues can be a monotonous and tiresome task.
Our aim is to develop a robot capable of effectively managing queues while simultaneously gaining societal acceptance, resulting in people who will follow its guidance with minimal resistance. To achieve this goal, we seek to identify a design that satisfies these criteria. Hence, our first research question:
RQ1: How can we develop an acceptable and effective robot for regulating people in public spaces?
Our approach involved learning how to design a robot based on the effective and accepted role of a human security guard in society. Security guards possess a high level of social power and legitimacy and garner much greater compliance from the general public than generic citizens [5]. Since a crucial aspect of their role is embodied in their professional image, we imbued our robot with the appearance of a professional security guard. We anticipate that such a design will enhance the robot's social power, facilitate people's understanding of its role, and improve acceptance of and compliance with its requests.
To create a professional impression for our robot, we incorporated the following three key features associated with a security guard's image: duties, professional behavior, and professional appearance. We conducted interviews with three guards with experience in queue-management services to gain insights into their duties. Based on our findings, we implemented ushering, admonishing, question-answering, and announcement services in our robot and modeled its ushering behavior on that of a professional guard. We also designed a customized guard's uniform for our robot to enhance its professional appearance.
Moreover, we faced uncertainty regarding the acceptance of a regulatory robot’s service (i.e., a queue-managing robot) in real-life situations. Unlike robots that provide such friendly and supportive services as guidance, entertainment, and assistance, there is a greater likelihood that people will reject a robot that is attempting to control their behaviors and admonish them for mistakes. The current limited knowledge about regulatory robots does not adequately capture how people perceive them in their everyday lives. It specifically remains unclear how individuals will react if a robot were to admonish them for inappropriate behavior in real-life scenarios and whether such interventions are deemed acceptable. In light of this, we recognized the need to comprehensively investigate how people perceive a robot that is seeking to regulate their behavior in real-world situations. Our aim was to gain deeper insights into the acceptance and reactions of individuals when confronted with a robot’s attempts to control their actions. Consequently, we formulated our second research question:
RQ2: How do people in public spaces perceive a robot that is attempting to control their behaviors?
We addressed our second research question by conducting a 10-day field trial at a children’s amusement event during which our robot autonomously managed a queue of people. During this trial, we conducted semi-structured interviews with both the event staff and the visitors who interacted with the robot and experienced its ushering and admonishing services. Our primary objective was to gain insights into people’s acceptance of the robot, understand their reasons for complying with/disobeying its admonishments, and compare their perceptions of a robot’s admonishment with that from a human. Additionally, we wanted to assess the extent to which our robot could autonomously provide queue-management services in a real-world setting.
The remaining sections of this article are organized as follows. Section 2 provides an overview of related works in the field. Section 3 discusses the design considerations that influenced the development of our queue-managing robot. Section 4 provides a detailed explanation of our robot system itself. A field trial and its results are presented in Sections 5 and 6. Section 7 includes a discussion of our findings, and Section 8 concludes the article.

2 Related Works

2.1 Social Robots in Public Space

Social robot services in public spaces are emerging in a number of sectors, such as commerce, health care, education, and security. They sometimes take on human jobs, including shopworkers, tour guides, and receptionists, and in other cases, they complement or support human workers. Social robots in public spaces often engage in friendly services. One common robot service is guiding and providing information in complex environments like shopping malls, train stations, and museums [21, 36, 41, 42]. Furthermore, due to robots' novelty and ability to provide an enjoyable experience, they attract customers to retail and hospitality facilities, such as shopping malls, stores, restaurants, and hotels, where they serve as front-line staff and perform duties like welcoming, entertaining, advertising, and delivering food and other items [10, 29, 31, 39, 40]. Robots are also used for security services like patrolling and acting as telepresence platforms for human staff in public environments [7, 45]. They have been used for regulating visitors' behaviors, for instance, encouraging adherence to social norms [27, 38] and to health measures during the COVID-19 pandemic [13, 23]. In contrast to such friendly services, however, robots' involvement in regulating visitors' behavior in public remains minimal. Our study expands the use of regulating robots in public spaces by introducing a novel application: a robot that manages a queue of people, a responsibility usually handled by human security guards or event staff.

2.2 Robots That Regulate People

Researchers are showing increasing interest in investigating the potential of social robots that regulate people's behaviors or encourage them to take actions that they are resisting. Past works explored a robot's capability to press participants in laboratory settings to continue tedious tasks [12], improve task performance [22], and stay disciplined and avoid cheating during exams [3, 28]. Several studies applied social robots in public spaces to compel people to follow social norms [27, 38], comply with COVID-19 measures [13, 23], and behave honestly [14]. In most past works, robots managed the behavior of only a single person or a small group of people at a time; limited research has investigated how to use a robot to regulate a crowd.
Although findings from previous studies revealed the potential of robots to regulate people's behaviors, their capability of performing such a role remains clearly inferior to that of humans. People tend to comply less with robots that are attempting to control their behaviors [22]. Forlizzi et al. [14] suggested that when people perceive that they are being monitored by a robot, they tend to evaluate its instrumental capabilities, feeling that it is less capable (e.g., less intelligent and slower to respond) and that they can escape any consequences of wrongdoing, and so they tend to continue their bad behaviors. Similarly, Schneider et al. [38] showed that people trivialize an admonishing robot due to its technological immaturity. People seem unwilling to accept robots as social peers imbued with the status to judge humans. Hoffman et al. [19] concluded that people feel less guilty after cheating in the presence of a robot than in the presence of a human. Some people have negative attitudes toward robots that are trying to control their behaviors. For instance, Jois and Wagner [22] found that participants perceived a punishing robot as less likable and felt less safe around it. These findings emphasize the importance of robot designs that evoke compliance and acceptance among people, especially if such robots are intended to work in public.
Several earlier works attempted to improve compliance with, and the acceptability of, regulatory robots. Mizumaru et al. [27] developed an approaching strategy for a robot that admonishes pedestrians who use smartphones by modeling a human security guard's admonishing approach and showed that more people comply when the robot uses their proposed model than when it uses a friendly approaching method. Schneider et al. [38] investigated why people ignore a robot's admonishments and proposed a counter-trivialization strategy to improve compliance; significantly fewer people ignored the robot with their strategy. In addition, Petisca et al. [34] demonstrated that a robot can reduce cheating by exhibiting situationally aware behaviors toward participants (intervening when cheating occurs), as opposed to being non-situationally aware. One of our previous works [13] achieved an acceptable design for a shopworker robot with admonishing functionality. We proposed a design that harmonized friendly and admonishing services and found that the unadmonished visitors and shop staff had a positive attitude toward the design and were willing to use its service. However, we did not investigate the opinions of the visitors who were admonished by the robot.
Several research gaps remain concerning regulatory robots. First, although the existing strategies significantly improved people's compliance, a noticeable portion of people continue to ignore regulating robots in public [27, 38]. Limited research exists on which robot designs are more accepted by society. Such limitations indicate a need to discover more effective strategies for improving compliance and raising the acceptability of robot designs. Furthermore, there is a lack of qualitative studies on how different stakeholders in public spaces perceive regulatory robots, such as those being regulated/admonished by robots and the employees who are expected to work with them. Our research addresses these limitations through the following contributions. We propose an acceptable and effective design for a queue-management service robot by studying the role of a professional security guard. We conducted a field trial at a public event and interviewed event staff and visitors who were regulated and admonished by the robot to learn more about their perceptions of the robot and its service.

2.3 Socially Aware Navigation

Socially aware navigation refers to the ability of robots to navigate human environments efficiently, safely, and in a socially acceptable manner [15]. This capability is essential for robots expected to serve in human-populated environments such as public spaces. Past works have examined robots' socially aware navigation in a variety of scenarios, such as approaching, passing, leading, accompanying, and combinations thereof [15]. The ushering behavior associated with queue-management service can be considered one kind of socially aware navigation.
Socially aware navigation is evaluated along two main aspects: navigation performance and how people perceive the robot's navigation. Researchers have used various metrics to evaluate these aspects. For navigation performance, success rate [32, 37] and navigation efficiency [26] are commonly utilized metrics. People's perception is evaluated along several components, such as naturalness of behavior, human discomfort, and sociability [15]. The evaluation of naturalness considers various factors, including similarity to human movement (assessed using metrics such as displacement errors [4]) and smoothness (measured by metrics like path irregularity [26], velocity, and acceleration [25]). Sociability is defined as how well robot behavior conforms to social norms, assessed through metrics such as the number of intrusions into personal space [32, 48] and questionnaires [46]. Human discomfort involves both physical safety and psychological discomfort. Physical safety is measured using collision rate [24] and the average minimum distance to humans during navigation [47]. Perceived psychological safety is often evaluated using questionnaires [43].
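To make two of the cited metrics concrete, the following is a minimal sketch (not from the surveyed papers) of how path irregularity and the minimum robot-to-human distance could be computed, assuming trajectories are lists of (x, y) points sampled at a fixed rate:

```python
import math

def path_irregularity(traj):
    """Total heading change (radians) along a path; a straight path
    scores 0, a winding path scores higher. Angle wrap-around is
    ignored for brevity in this sketch."""
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(traj, traj[1:])]
    return sum(abs(b - a) for a, b in zip(headings, headings[1:]))

def min_distance_to_humans(robot_traj, human_trajs):
    """Minimum robot-to-human distance over a run (a physical-safety
    metric). `human_trajs` holds one trajectory per tracked human,
    time-aligned with `robot_traj`."""
    return min(math.dist(r, hs_t)
               for r, humans_at_t in zip(robot_traj, zip(*human_trajs))
               for hs_t in humans_at_t)
```

Averaging the per-timestep minimum instead of taking the global minimum yields the average-minimum-distance variant mentioned above.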

3 Design Considerations

Our approach is to learn an acceptable and effective design for queue management based on the role of a human security guard, a role that is well accepted in modern societies. Most people are familiar with security guards and have some understanding of their duties. In Japan, besides general security-related tasks, guards also perform queue management at public events and prevent inappropriate actions. Generally, people comply with a request by a security guard without much resistance.
An important design factor is the image of security guards. Social studies show that more people comply with a request from a uniformed security guard, even when he is acting outside his role, than with a request from a generic citizen, a result that indicates guards' social power [5, 9]. If our robot were to possess such a professional image, people might readily accept it and cooperate with its queue-management service.
To create an image of a professional security guard for our robot, we studied the following three key aspects of a human security guard’s image:
(1)
Duties: Security guards are expected to perform certain duties during their queue-management service. Awareness of their responsibilities will improve the robot’s service quality.
(2)
Service behavior: Professional security guards receive special training for their job. Therefore, their service behavior differs from that of a novice.
(3)
Appearance: Professional security guards can be easily recognized among the crowd.

3.1 Duties of Queue-Management Service

Queue-managing security guards are expected to perform certain duties. The inability or unwillingness to perform them will lead to a failed queue-management service. Thus, we interviewed professional security guards to deepen our understanding of the duties of queue-management services.

3.1.1 Interview Procedure.

Our approach was to learn from the experiences of security guards at as many types of events as possible to get a general idea of the duties of queue-management service. We interviewed three male professional security guards (ages 41, 59, and 65 years) who have at least 1 year of queue-management experience at two different types of public events. Collectively, their experience covers at least seven different types of situations, ranging from sports events, concerts, and festivals to sales, store openings, exam centers, and factories, all on various scales. Furthermore, they have a broader level of experience in different kinds of security services such as security at facilities, traffic control, and night patrolling. For example, one of the guards has experience serving as a security guard supervisor, holds certification for crowd control, and has over 23 years of service as a guard.
Interview questions were written, and interviews were conducted by one of the authors. The interviews were video-recorded and transcribed for analysis. We mainly inquired about the following three points:
Their duties during queue-management service;
Common inappropriate behaviors of visitors and strategies for handling them;
Expectations about using a robot for queue-management service.

3.1.2 Interview Results.

(1)
Duties of queue-management service
Avoiding problematic situations is the main goal of queue management. Security guards are assigned to locations where problems are most likely, such as a queue’s end, queue gaps (sometimes long queues are interrupted by roads/streets), or a long queue that bends around a corner.
Security guards perform the following duties during queue-management services: leading newcomers to the end of the queue (ushering), admonishing visitors who are engaging in inappropriate behavior (described in (2)), and making announcements. The guards perform ushering and announce queue-end locations with language designed to guarantee that visitors line up properly and to prevent incorrect behavior. For example, visitors sometimes misidentify the end of the line and enter its middle. If trouble happens, such as delays, the guards also announce such information to forestall potential dissatisfaction. The guards mentioned that since making announcements is considered a challenging task by some novice guards, they sometimes fail to make them. In addition, guards answer visitor questions about the event and the surrounding area, such as the locations of restrooms and shops and the event's starting time.
(2)
Common inappropriate behaviors of visitors and a strategy for handling them
The common inappropriate behaviors of the visitors in a queue include failing to move forward in the line (e.g., talking without paying attention), queue jumping, lining up in a way that disturbs others (i.e., blocking a space through which pedestrians must pass), and so on.
The security guards basically admonish such visitors to stop their inappropriate behaviors. They make their requests politely but firmly enough to be obeyed (if the impression is too soft, the target might not comply), and a clear voice is also important. When they must explain their reason to visitors, the guards use a polite strategy to minimize bad impressions: “Please make space for others to avoid disturbing their walking.” Visitors who perceive an admonishment as “an order” might complain to the management.
(3)
Expectations of using a robot for queue management
Security guards believe that ushering and announcing are tedious tasks; as a result, novice security guards often avoid making announcements. They believe that a robot is better suited than a human for such monotonous tasks.

3.2 Modelling Professional Service Behavior

We modeled a human security guard's ushering behavior in our robot to make its service more professional and effective; we especially wanted visitors to naturally follow its ushering. We conducted a role-play of a queue-management situation with a hired professional security guard and observed his ushering behavior. To understand the nuances in a professional's approach to the job, we also compared the behaviors of the professional guard and novices.

3.2.1 Case Study: Professional vs. Novice Ushering Behavior.

We modeled the ushering behavior of one professional guard to mimic an effective service behavior instead of making a general behavior model.
(1)
Participants
We hired a professional security guard (male, 45 years old) who has experience in ushering, crowd control, managing queues, and customer service at large public events and retail stores. He has been employed as a professional guard for 1 year, and before that, he worked part-time ushering visitors to public events. We also hired four participants (two males and two females) with no prior experience in queue management to represent novices.
(2)
Procedure
We set up a queue-management scenario for an imaginary event. Three people were hired as visitors. Two lined up at the entrance, and another acted as a newcomer. We conducted five (i.e., one with the professional and four with the novices) role-play sessions of queue-management services. One session consisted of 30 ushering incidents. In each session, the professional or novice who played the security guard role was asked to stay at the event’s entrance and lead a newcomer to the end of the queue. In 30 trials, the newcomer joined the queue by varying his starting point and walking speed. For each session, the person who played the newcomer was changed. The behaviors of the participants were video-recorded.

3.2.2 Observation.

The following is the professional's behavior. He stood within 0.5–1 m of the last person, almost as if he were in the queue, and waited for newcomers (Figure 1(a)). When a newcomer walked toward the queue, the professional started to usher them in. Professional ushering behavior (Figure 2) consists of three main steps: acknowledging, yielding and pointing, and moving on. The professional acknowledged the approaching newcomer by looking and nodding at them (Figure 2(a)). As the newcomer approached, the professional yielded his waiting location, pointed to the end of the queue, and said: “Here is the end of the queue” (Figure 2(b)). As the newcomer lined up, the professional moved to a new waiting location at the queue's end (Figure 2(c)).
Fig. 1.
Fig. 1. Professional’s waiting location vs. novice waiting location.
Fig. 2.
Fig. 2. Professional ushering behavior.
The following is a typical behavior of a novice. Unlike the professional, three of the four novices waited at a relatively large distance (1.5–2 m) from the last person in the queue (Figure 1(b)); the other one waited within 1 m. When a newcomer was noticed, the ushering of all four novices resembled the manner shown in Figure 3. A typical novice ushering behavior consists of two steps: acknowledging and pointing. The novices acknowledged newcomers by looking at them (Figure 3(a)). Then they pointed at the queue's end and asked them to get in line (Figure 3(b)). After that, they continued waiting at the same location for the next visitor (Figure 3(c)).
Fig. 3.
Fig. 3. Novice ushering behavior.
We identified three differences between the novice and professional ushering behaviors. (1) Waiting location: The professional's waiting location was relatively near the last person in the queue (within 1 m) compared to the novices' waiting locations (between 1.5 and 2 m) (Figure 1). (2) Yielding when pointing: The professional yielded to the newcomers when pointing, whereas the novices pointed from where they were standing (Figures 2(b) and 3(b)). This might be a result of the difference in waiting locations: in the novices' case, a newcomer has enough space to approach the queue's end, so no yielding is required. (3) Moving on after pointing: The professional moved to a new waiting location (i.e., updating his waiting location according to the newly lined-up visitor) every time a new visitor queued up, but the novices did not. From our observations, we felt that the professional was trying to maintain the end of the queue (Figures 2(c) and 3(c)). However, we failed to ask him to explain his behavior, an oversight that is a limitation of our procedure. When we asked the novice participants, they said that they were not maintaining the last position in the queue; rather, they chose a waiting location from which they could easily guide visitors arriving from various directions.
In addition to ushering, we observed that when there were no newcomers, the professional kept waving his hand and announcing the event information (Figure 1(a)). However, none of the novices made such announcements.

3.2.3 Ushering Model.

We defined the ushering model as follows:
(1)
Waiting location: The professional guard’s waiting location is less than 1 m from the last person in the queue and is almost aligned with the queue.
(2)
Three-step ushering behavior: Professional ushering behavior consists of the following three steps:
(a)
Acknowledging: Greeting the approaching visitors by making eye contact with them.
(b)
Yielding and pointing: Yielding to the visitor, pointing to the queue’s end, and saying: “Here is the end of the queue.”
(c)
Moving on: As the newcomer lines up, the guard moves to a new waiting location at the end of the queue.
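The ushering model above can be expressed as a small, executable sketch. Positions are (x, y) tuples in meters; the function and parameter names are illustrative, not those of our robot's implementation:

```python
import math

def waiting_location(last_person, queue_direction, offset=1.0):
    """Waiting spot within `offset` m of the last person, aligned with
    the queue line (step (c): 'moving on' after a newcomer lines up).
    `queue_direction` points from the queue's front toward its end."""
    norm = math.hypot(*queue_direction)
    ux, uy = queue_direction[0] / norm, queue_direction[1] / norm
    # Stand `offset` m past the last person, extending the queue line.
    return (last_person[0] + ux * offset, last_person[1] + uy * offset)

def ushering_steps(newcomer, queue_end):
    """Ordered three-step behavior: acknowledge, yield-and-point, move on."""
    return [
        ("acknowledge", newcomer),                  # (a) eye contact / greeting
        ("yield_and_point", queue_end,
         "Here is the end of the queue"),           # (b) step aside and point
        ("move_on", queue_end),                     # (c) update waiting spot
    ]
```

Keeping the waiting spot on the queue line, within 1 m of the last person, is what distinguishes the professional's behavior from the novices' in our observations.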

3.3 Expressing Security Guard’s Role by Appearance

We dressed our robot in a uniform that resembles one worn by professional security guards to make its role clear to people and give it a competent appearance. An agent's physical appearance evokes his or her role, and people's reaction to a request depends on the requester's physical appearance [5]. For instance, when a standardly dressed person and another wearing a guard's uniform approached people in the street and asked them to pick up a bag, more people complied with the person in uniform [5].
Human security guards wear a special type of uniform that makes them look professional and prominent among others even in crowded situations [1]. Such uniforms are associated with a certain degree of legitimacy that influences people to obey their requests [5, 9]. Thus, we believe a uniform is an essential feature upon which to erect a professional image for our robot security guard.
We made a customized uniform design for our robot (Figure 5). We chose a design that resembles the uniforms of a local security company's guards because people are familiar with them. The design was customized to the needs of our robot: we removed the sleeves and shoulder cord because they might have limited its arm movements, but we retained such essential features as the hat, which helps people easily recognize that the robot is functioning as a guard.

4 Robot System

4.1 System Architecture

Figure 4 depicts the system architecture of our queue-managing robot. Basically, the target selection module (Section 4.4) selects target visitors for its service, visitors approaching the queue as ushering targets and those engaging in inappropriate behavior as admonishing targets, and the robot executes the appropriate behavior (Section 4.5). The robot localizes itself (Section 4.6.1) in the environment with a multilayered map that includes a pre-defined queue area (i.e., the area where visitors are supposed to line up) and the information received from an odometer and three-dimensional (3D) LiDAR. The people-tracking module (Section 4.6.2) maps out the locations and movements of the visitors in the surrounding area by accessing the information in the localization module. The queue-detection module (Section 4.3) locates the queued-up people with the people-tracking data. The target selection module uses the people-tracking and queue-detection modules to select the ushering and admonishing targets based on the visitors' activities, which are then used to select the appropriate behavior for execution. During execution, each behavior reads the sensory data and controls such robot output as utterances, gestures, and velocity. If a visitor wants to talk with the robot, a human operator (Section 4.7) performs the speech recognition task on its behalf.
Fig. 4.
Fig. 4. System architecture.
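The data flow above can be sketched as a single pipeline iteration. The class and method names below are illustrative placeholders, not our system's actual API; the 0.1-second period matches the update rate given in Section 4.4:

```python
def tick(localization, people_tracker, queue_detector,
         target_selector, behaviors):
    """One iteration of the queue-management pipeline, intended to be
    run every 0.1 s. Module interfaces here are hypothetical stand-ins."""
    pose = localization.update()                     # Section 4.6.1
    people = people_tracker.update(pose)             # Section 4.6.2
    queue = queue_detector.update(people)            # Section 4.3
    target = target_selector.select(people, queue)   # Section 4.4
    if target is not None:
        behaviors.execute(target)                    # Section 4.5
    return target
```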

4.2 Robot

We implemented the queue-management functionality in a robot named Robovie (Figure 5), which is 120 cm tall and has a humanoid appearance. We made this choice because Robovie has enough features to mimic a human security guard. Its upper body has 11 degrees of freedom, which allow it to make the gestures (such as pointing, gazing, and waving) necessary for queue-management service. It has a QFS-02-ver1 mobile base developed by Qfeeltech for navigation. Its omnidirectional wheels enable rotation and sliding motions, which satisfy the navigation requirements of ushering behavior. It can move in any direction at a maximum speed of 0.8 m/sec and a maximum angular velocity of 60 degrees/sec. We attached four laser range finders (Hokuyo UTM-LX) to its mobile base to detect obstacles during navigation. A LiDAR (Velodyne HDL-32E) was mounted on a pole at a height of 143 cm on Robovie's back for localization and people-tracking.
Fig. 5. Robot.

4.3 Queue Detection

We implemented functionality to detect the queued-up people and track the last person using the people-tracking data. We tracked the last person in line for two reasons. First, the robot must determine its waiting location; according to the professional ushering model (Section 3.2.3), the robot should wait within 1 m of the last person. Second, we wanted to support the detection of people approaching the queue (i.e., potential ushering targets). We defined a queue area (where people might queue up) in advance (Figure 6). The robot detects queued-up people from their positions in the queue area using a pre-defined distance threshold (i.e., lined-up distance \(=\) 1.5 m) and tracks the last person. If a new person enters the queue area within the distance threshold from the last person, the robot updates the newcomer as the last person.
Fig. 6. Map with pre-defined queue areas.
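The last-person update rule can be sketched as follows. This is a minimal illustration under simplifying assumptions: people are reduced to 2D positions, and the queue-area membership test (`in_queue_area`) is a hypothetical helper standing in for the pre-defined map region.

```python
import math

LINED_UP_DIST = 1.5  # meters; lined-up distance threshold from Section 4.3

def update_last_person(last_person, tracked_people, in_queue_area):
    """Update the queue's last person when a newcomer joins.

    last_person: (x, y) of the current last person, or None if the queue is empty.
    tracked_people: list of (x, y) positions from the people-tracker.
    in_queue_area: predicate testing whether a position lies inside the
    pre-defined queue area (hypothetical helper).
    """
    for p in tracked_people:
        if not in_queue_area(p):
            continue
        if last_person is None:
            last_person = p  # first person forms the queue
        elif math.dist(p, last_person) <= LINED_UP_DIST:
            last_person = p  # newcomer within threshold becomes the new last person
    return last_person
```

Run at the people-tracking update rate, this keeps the robot's notion of the queue's end current; a person farther than 1.5 m from the last person is not treated as having joined.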

4.4 Target Selection

We implemented functionality to select target visitors for ushering and admonishing based on their activities. This module uses the people-tracking and queue-detection modules for activity detection, which is updated every 0.1 second to follow the visitors’ movements. The target selection is done as follows:
(1)
Ushering target: To select the ushering target, the robot first detects all the people approaching the end of the queue. Potential ushering targets are detected by computing the closest distance (\(d_{s}\)) between the queue’s end and a person’s walking trajectory and comparing it with a pre-defined threshold value, \(d_{th}\) (2 m): if \(d_{s}\leq d_{th}\), the person can join the queue and is thus a potential ushering target. Next, the distance from the queue’s end to each potential ushering target is computed, and the closest one is identified as the ushering target (Figure 7).
(2)
Admonishing targets (i.e., visitors who engage in inappropriate behaviors): For the admonishing target selection, the robot first identifies visitors who are engaging in inappropriate behavior and waits for the operator’s confirmation. We added this decision-confirming step to avoid erroneous target selections. Visitors are detected when they engage in one of two types of inappropriate behavior: failing to move forward in the queue and blocking the queue area without properly queuing up. To identify people who have not moved forward in line, the robot computes the distance between adjacent queued-up people, compares it against a threshold value (1.5 m), and detects those whose queued-up distance exceeds the threshold. To identify those who are waiting in the queue area without queuing up, it compares all the people waiting in the queue area against those who have already lined up; the remainder are identified as not lined up.
Fig. 7. Detection of potential ushering targets.
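The two-stage ushering-target selection (threshold on \(d_{s}\), then the candidate nearest the queue's end) might look like the following sketch, under the assumption that walking trajectories are simplified to lists of sampled 2D points with the most recent point last:

```python
import math

D_TH = 2.0  # meters; trajectory-to-queue-end threshold d_th from Section 4.4

def closest_trajectory_distance(trajectory, queue_end):
    """Closest distance d_s between a sampled walking trajectory
    (list of (x, y) points) and the queue's end."""
    return min(math.dist(p, queue_end) for p in trajectory)

def select_ushering_target(trajectories, queue_end):
    """Return the id of the ushering target, or None if nobody qualifies.

    trajectories: dict mapping person id -> list of (x, y) points.
    A person is a potential target if d_s <= D_TH; among the potential
    targets, the one currently closest to the queue's end is chosen.
    """
    candidates = {
        pid: traj for pid, traj in trajectories.items()
        if closest_trajectory_distance(traj, queue_end) <= D_TH
    }
    if not candidates:
        return None
    # Current position is approximated by the last trajectory sample.
    return min(candidates,
               key=lambda pid: math.dist(candidates[pid][-1], queue_end))
```

In the deployed system this selection runs every 0.1 second on live tracking data; the sketch only shows the geometric decision rule.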

4.5 Behavior Implementation

4.5.1 Ushering Behavior.

In the ushering behavior, the robot guides new visitors to the queue’s end. We implemented the professional security guard’s ushering model described in Section 3.2.3. Basically, the robot stands in the next spot that a newcomer should fill and yields that spot to the newcomer (Figure 8). The robot’s ushering behavior was implemented with two key elements: ushering target selection and ushering behavior execution. Following the professional guard’s waiting location, the robot waited within 1 m of the last person in the queue, giving visitors enough space to pass through while it watched for newcomers. It autonomously selects the ushering targets, as described in Section 4.4. Next, the robot executes its ushering behavior in three steps: acknowledging visitors (Figure 8(a)), pointing out the queue’s end and yielding (Figure 8(b)), and moving to the next position (Figure 8(c)). First, when a target person comes within 7.5 m (the acknowledgement threshold distance, \(d_{ak}\)) of the next spot in the queue, the robot starts looking at his/her face. Once the person comes within the pointing threshold distance \(d_{p}\) (5.5 m) of the queue’s end, the robot points to the end of the queue, says, “Please queue up here,” and lets the newcomer join the line. While the person is queuing up (i.e., \(d_{i}\leq d_{q}\)), the robot considers him/her the queue’s new last person and moves to the new waiting spot. All the threshold values were tuned by trial and error.
Fig. 8. Ushering behavior execution.
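The distance thresholds above induce a simple phase mapping, which can be sketched as follows. The phase names are hypothetical labels for the three steps, not identifiers from the actual system:

```python
D_AK = 7.5  # acknowledgement threshold d_ak (m), Section 4.5.1
D_P = 5.5   # pointing threshold d_p (m); both tuned by trial and error

def ushering_phase(dist_to_next_spot, person_queued):
    """Map the target's distance to the next queue spot onto the
    three ushering phases (a minimal sketch of the decision logic)."""
    if person_queued:
        return "move_to_next_position"  # yield the spot, update waiting point
    if dist_to_next_spot <= D_P:
        return "point_and_yield"        # point to queue end: "Please queue up here"
    if dist_to_next_spot <= D_AK:
        return "acknowledge"            # gaze at the target's face
    return "wait"                       # stand within 1 m of the last person
```

Evaluating this mapping on every tracking update yields the phase transitions described in the text as the target walks toward the queue.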

4.5.2 Admonishing Behavior.

In this behavior, the robot admonishes visitors who are engaged in one of two inappropriate behaviors: blocking the queue area without joining the queue and failing to move forward. The admonishing behavior has two elements: admonishing target selection (i.e., identifying a visitor engaged in inappropriate behavior) and admonishment execution.
The selection of admonishing targets is described in Section 4.4. After identifying a visitor who is engaged in inappropriate behavior, the robot requests confirmation from a human operator, who must grant permission before an admonishment can be executed. We designed this confirmation step to reduce erroneous admonishments caused by inaccurate recognition. After the operator confirms the admonishing target, the robot warns the visitor by executing the following behavior sequence: approaching, admonishing, and thanking. In the first phase, the robot approaches within 1.2 m of the target person. If the target person is in the queue, the robot moves parallel to the queue during its approach (Figure 9), appears next to the person, and makes an admonishment appropriate to the situation: “Could you please move forward?” with a hand gesture, or “Excuse me, please queue up along the chain.” If the person corrects his/her behavior, the robot executes its thanking phase and bows to acknowledge the cooperation. It repeats its admonishment up to three times. If the person still has not complied, the robot stops admonishing.
Fig. 9. Robot approaching to admonish.
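The sequence above (operator confirmation, approach, up to three admonishments, thanks on compliance) can be sketched as a control routine. This is a minimal illustration with hypothetical callback names, not the deployed implementation:

```python
MAX_ATTEMPTS = 3     # the robot repeats its admonishment up to three times
APPROACH_DIST = 1.2  # meters; approach distance from Section 4.5.2

def run_admonishment(operator_confirms, approach, admonish, complied, thank):
    """Admonishing sequence expressed with callbacks (hypothetical names).

    operator_confirms(): the human operator grants permission first.
    approach(): move within APPROACH_DIST of the target, parallel to the queue.
    admonish(): utter the situation-appropriate request.
    complied(): check whether the visitor corrected the behavior.
    thank(): bow to acknowledge the cooperation.
    Returns True if the visitor complied, False otherwise.
    """
    if not operator_confirms():
        return False  # confirmation step avoids erroneous admonishments
    approach()
    for _ in range(MAX_ATTEMPTS):
        admonish()
        if complied():
            thank()
            return True
    return False  # give up after three unheeded attempts
```

Separating the sequence from the callbacks keeps the safety-critical steps (operator confirmation, bounded retries) explicit and testable.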

4.5.3 Greeting and Question-Answering Behavior.

In this behavior, the robot greets the visitors who approach it and answers their simple questions about the event. Once a visitor starts a conversation, the operator interprets the utterance and chooses an appropriate answer from a pool of pre-defined responses. We pre-defined a set of conversations: greetings for children and answers to common questions, such as explanations of the event itself, ticket prices, event schedules, and starting/closing times. The robot greets the children waiting in front of it and encourages them to line up: “Hello. Welcome to Let’s Go Thomas (i.e., the event name). If you don’t line up properly, you’ll miss Thomas,” and “Don’t be shy! Let’s get lined up!”

4.5.4 Announcing Behavior.

Figure 10 shows the robot’s announcing behavior, in which it describes the event’s location to help visitors find it. The robot makes its announcements when it is not engaged in any of its other behaviors (i.e., ushering, admonishing, and question answering). During its announcements, the robot waits near the end of the queue, looks around for visitors, waves, and occasionally informs them of the facility’s location. To avoid annoying the visitors by excessively repeating this announcement, we set a 35-second interval between announcements. This behavior was inspired by the professional guard’s announcing behavior.
Fig. 10. Robot’s announcing behavior.

4.6 Other Modules

4.6.1 Localization.

Reliable localization is required in open public environments, e.g., an exhibition hall with crowds of ambulatory visitors. In our implementation, we used a particle-filter-based method [18] with Velodyne LiDAR point cloud data to achieve six-dimensional localization for the robot. The robot localized itself at 10 Hz on a 3D point cloud map created beforehand, using 3D point cloud data collected in real time and odometry inputs.
Due to the daily change of features (e.g., barriers, signal boards) and the intensive human flows during the exhibition, the local environment around the robot always differed from the pre-built map, a challenging situation for localization in a large open space. We configured the robot to use the point clouds within a 20-m radius so that unchanged architectural features (e.g., walls, gates, pipes in the exhibition hall) could be perceived to localize the robot more accurately among the crowds. After increasing the perception range, the denser point cloud clusters near the robot were filtered out of the calculation to balance the computational cost of localization.
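The range-handling idea can be illustrated with a simple scan filter. The 20-m perception range comes from the text; the near-range cutoff and downsampling factor are hypothetical placeholders for the actual density filtering:

```python
import math

PERCEPTION_RANGE = 20.0  # meters; extended so static structure stays visible
NEAR_RANGE = 5.0         # hypothetical cutoff for "dense near-robot" points
NEAR_KEEP_EVERY = 4      # hypothetical downsampling factor

def filter_scan(points):
    """Range-filter a LiDAR scan for localization (a sketch of the idea
    in Section 4.6.1, not the deployed implementation): keep points out
    to 20 m so distant static features (walls, gates) contribute, and
    thin out the dense clusters near the robot to bound the cost.

    points: list of (x, y, z) in the robot frame.
    """
    kept, near_count = [], 0
    for p in points:
        r = math.hypot(p[0], p[1])  # horizontal range from the robot
        if r > PERCEPTION_RANGE:
            continue  # beyond the perception range
        if r < NEAR_RANGE:
            near_count += 1
            if near_count % NEAR_KEEP_EVERY != 0:
                continue  # drop most near points (dense clusters)
        kept.append(p)
    return kept
```

The filtered scan would then feed the particle-filter update, trading a little near-field detail for a wider, crowd-robust view of the static map.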

4.6.2 People-Tracking.

For people-tracking, we applied an algorithm to the LiDAR point cloud data that consists of three steps: background subtraction, people-detection, and people-tracking. Background subtraction is done immediately after localization: it compares the raw point cloud data with a 3D map of the environment and removes the entities in the map, leaving point cloud data that represent movable entities. To detect people, clustering is applied, and clusters that approximately match a human’s size are detected as bodies. Finally, in the people-tracking step, we applied a particle filter to the detection results. Even with occlusions such as passersby, it can continuously track locations within 15 m of the robot every 0.1 second.
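The first two steps of the pipeline might be sketched as follows in 2D. All thresholds here are illustrative assumptions, and the sketch omits the final particle-filter tracking step used in the deployed system:

```python
import math

BG_DIST = 0.2       # hypothetical background-match tolerance (m)
CLUSTER_DIST = 0.5  # hypothetical clustering radius (m)
HUMAN_MIN_PTS = 5   # hypothetical minimum cluster size for a human body

def detect_people(scan, background):
    """Background subtraction plus proximity clustering (Section 4.6.2 sketch).

    scan, background: lists of (x, y) points.
    Returns the centroids of clusters of roughly human size.
    """
    # 1. Background subtraction: drop scan points near any mapped point.
    fg = [p for p in scan
          if all(math.dist(p, b) > BG_DIST for b in background)]
    # 2. Greedy proximity clustering of the remaining (movable) points.
    clusters = []
    for p in fg:
        for c in clusters:
            if any(math.dist(p, q) <= CLUSTER_DIST for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    # 3. Size check and centroid computation.
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters if len(c) >= HUMAN_MIN_PTS]
```

A real implementation would use efficient spatial indexing and 3D size checks; the sketch only shows the structure of the three steps.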

4.6.3 Speech Recognition.

We used a semi-autonomous approach [16] for speech recognition instead of automatic speech recognition (ASR) due to the high ambient noise, such as loud music and the voices of many visitors, in event environments. As shown in this previous work, a single operator can simultaneously control several robots, so our system can provide a realistic service even with a human operator. The human operator typed the visitors’ utterances into the system without any supplementary information. Thus, even a novice with no prior knowledge of the event can serve as an operator.

4.7 Role of Human Operator

We used a human operator to compensate for the technical limitations of our robot system. This concept is similar to the Wizard of Oz approach [35], as well as the role of the safety driver in autonomous vehicles, who supervises and operates the vehicle to ensure safety and conformity [11]. The operator worked as another module, one that will probably be replaced by sophisticated autonomous modules as technology develops. To explore the robot’s capability for autonomous operation, we limited the operator’s assistance to the following situations, where human intervention is essential for performing its duties. We used a graphical user interface (GUI) to give the operator control:
(1)
Selecting appropriate queue settings: We used multiple queue area definitions (Section 4.3) during the field trial. The operator is responsible for checking the size of the crowd in the queue and switching to a new queue area definition when the current one is filled.
(2)
Confirmation of inappropriate-behavior detection: We delegated the task of confirming inappropriate-behavior detections to the operator to avoid erroneous identifications by the robot. The operator determined the accuracy of the robot’s detection of inappropriate behaviors (not moving forward in the queue and waiting in the queue area without joining it) by observing the visualizations in the GUI.
(3)
Error handling: The operator was responsible for recovering from localization and navigation errors. When the robot’s sensor data don’t match the map, it cannot find its true location on the map; the operator then corrected its location. When the robot moved too close to obstacles, its safety module stopped its autonomous navigation. In such cases, the operator recovered the robot’s navigation and released its safety lock.
(4)
Speech recognition: We abandoned ASR due to its low accuracy in noisy environments. The operator interpreted people’s inquiries and selected a relevant robot response from a pool of utterances.

5 Field Trial

We conducted a 10-day field trial to study our robot’s ability to manage a queue of people at an actual public event and people’s interactions with it. Furthermore, we interviewed as many visitors as possible, as well as the event staff, to learn their impressions of our robot and its services.

5.1 Environment

We tested our robot at a children’s amusement event for 10 days from 29 April to 8 May 2022. The event was held in a large indoor space that had several facilities for children. Its visitors were basically families with elementary school kids. The robot engaged in its queue-management duty at the entrance of one of the amusement facilities where kids and parents were queued up for a train ride (Figure 11).
Fig. 11. Field trial environment.

5.2 Procedure

For our field trial, we used the robot (described in Section 4.1) that implemented the queue-management functionality. We put it in the queuing area in front of the train facility, where it announced event information, ushered new visitors to the queue’s end, and admonished visitors who engaged in inappropriate behaviors. Visitors and event staff could freely interact with the robot. We did not provide any prior description of its functionality or instructions to the visitors, except a simple sign at the entrance of the train facility: “A field trial of robot services is in progress, and videos are being recorded for analysis purposes.” We placed a human operator in a separate room to assist the robot’s operation when needed. He supervised the robot’s operation, similar to the role of a safety driver [11] in autonomous vehicles. Furthermore, he served as the “wizard” [35] to compensate for the technical limitations of the robot system. He updated the queue area settings, confirmed inappropriate-behavior detections, performed speech recognition, and dealt with errors. The robot generally provided its queue-management service for 2.6 hours a day.
We conducted our field trial with five other investigators: a safety operator, two interviewers, a camera operator, and a coordinator. All stayed in non-disturbing locations, 3–4 m from the queue area. The safety operator ensured the safety of the robot and the visitors. He could stop it with a Bluetooth controller in emergency situations, such as children running toward it. The two interviewers stood near an exit of the train facility, observed the customers’ interactions with the robot, and approached them for interviews after they finished the train ride. The camera operator captured the field trial sessions. The coordinator ensured smooth operation in the field trial: she observed the field study, communicated with the other investigators, and made any necessary decisions.

5.3 Data Collection

Since we wanted to understand our robot’s effectiveness in queue management, its capability for autonomous operation, its robustness in a real environment, and the impressions and acceptance of visitors and event staff, we collected three forms of data: system records, observations, and interviews. Furthermore, we used several measurements to evaluate the robot’s effectiveness.

5.3.1 System Records.

To understand our robot’s autonomous working capability, we collected system logs, which recorded the following information related to incidents in which the operator assisted the system: changing queue area settings, resolving navigation and localization errors, and speech recognition. We recorded the type of task and the times that the operator started and finished each assisting task. From those data, we calculated the total time that the robot was controlled by the operator and the total time during which it was autonomous.

5.3.2 Observations.

To understand the effectiveness of our security guard robot in queue management and to learn about visitors’ interactions with it, we recorded the field trials using three static video cameras and a handheld camera. We subsequently analyzed these recordings to understand the visitors’ reactions to the robot’s ushering and admonishing services, their typical interactions with the robot, as well as incidents involving human staff intervention in queue management.

5.3.3 Measurements.

We calculated several measurements based on our observation data to evaluate the robot’s effectiveness. We considered two aspects: compliance with the robot and the performance of the robot’s queue-management service.
Rate of compliance to admonishing: The ratio between the number of incidents in which visitors corrected their behavior after the robot’s admonishment and the total number of admonishing incidents.
Success rate of ushering: The ratio between the number of successfully ushered people and the total number of people who joined the queue.
Service failure rate: The ratio between the total duration of robot service failures and the robot’s total service time. We defined a service failure as an incident in which human staff had to intervene in queue management because the robot couldn’t resolve the situation. We measured the failure duration as the time from when the problem started to when a staff member intervened.
Additionally, we assessed the “perceived effectiveness of the ushering service” using qualitative data (i.e., interviews).

5.3.4 Interviews.

We conducted semi-structured interviews with the queued-up visitors and the event staff to learn their impressions and acceptance of our robot and its queue-management service. We abandoned quantitative measures for evaluating social acceptance due to the difficulty of employing such measures given the nature of our field study.
We formulated our interview questions to evaluate the following aspects, inspired by works on the acceptance of social robots and socially aware navigation: attitude toward the robot and its services, usefulness of the robot’s service (from staff), intention to use (from staff), and perceived naturalness of the robot’s ushering behavior. In each question, we asked them to elaborate on their answers. The interviews were voice-recorded and transcribed for analysis.
Visitor interviews
We approached visitors who were queued up and had received the robot’s service (ushering and admonishing) for interviews while they were waiting in the queue or after they had finished the train ride. We omitted children because they were too young to answer properly. The visitors answered the following questions based on their experience with the robot.
(1)
Impression of the robot and ushering service:
What did you think about the robot and its ushering services?
(2)
Ushering service quality: To understand the effectiveness and quality of the ushering service, we asked them the following questions:
(a)
How did you interpret the robot’s behavior?
(b)
Did you obey (or disobey) the robot’s instructions?
(c)
Did you feel that the robot’s ushering behavior was natural or unnatural?
(3)
Admonishing service: We heard the opinions of visitors regardless of whether they had been admonished. Visitors who weren’t admonished were asked to imagine engaging in an inappropriate behavior and being subsequently admonished by the robot. We asked them the following questions:
(a)
Did you comply/will comply with the robot’s admonishment?
(b)
How did the human’s admonishment feel compared with the robot’s?
(c)
What was your impression of the robot that admonished people?
(4)
Comparison of suitability:
Which is better for a queue-management service: a robot or a human?
Staff interviews
Interviews with the event staff members were done on the last day of the field trial to allow them enough time to become familiar with the robot. The questions for the staff differed slightly from those for the visitors, reflecting their different purposes for using the robot. We asked them the following questions:
(1)
Impression of the robot
What was your opinion of the robot security guard?
(2)
Impression of overall service
(a)
How did you feel about the overall service provided by the robot?
(b)
Was its service useful?
(3)
Ushering service
(a)
What did you think about the robot security guard’s ushering service?
(b)
Was its ushering service natural?
(4)
Admonishing service
(a)
What did you think of the robot-admonishing service?
(b)
Who provides a better admonishing service: a robot or a human security guard?
(5)
Comparison of suitability for queue-management
Which is better for a queue-management service: a robot or a human?
(6)
Intention to use in future
Would you like to interact with a robot security guard’s service in future events?
In addition, we gathered such background information about the staff as their initial expectations of the robot, their work experience, and their prior experience with robots.

6 Results

6.1 System Statistics

The performance statistics of our robot are shown in Table 1. It worked 10 days at the event. Its total queue-management service time was 1,539.2 minutes, of which 1,494.7 minutes were autonomous service time: 97.1% of its total working time. The human operator was on standby during the robot’s entire service. However, he actually served for only 44.5 minutes, including 4.8 minutes of updating the queue area settings, 6.3 minutes of resolving localization errors, 30.9 minutes of admonishing-target confirmation, and 2.5 minutes of speech recognition. The system requested operator assistance for 265 incidents: 143 occasions of updating queue area settings, 18 of resolving localization errors, 101 of admonishing-target confirmation, and 3 of speech recognition when visitors wanted information about the event. Among the 101 requests for admonishing-target confirmation, the operator identified 62 incidents as inappropriate behaviors and the remaining 39 as false detections. Thus, the robot admonished visitors in 62 incidents. The average time between operator intervention incidents was 5.8 minutes.
Measurement | Value
Total days of service | 10
Total number of visitors served by robot | 2,486
Total service time | 1,539.2 minutes
Autonomous service time | 1,494.7 minutes
Operator service time | 44.5 minutes
Average time between two incidents of operator assistance | 5.8 minutes
Total admonishing time | 30.9 minutes
Total ushering, announcing, and question-answering time | 1,508.3 minutes
Total duration of robot service failure | 3.3 minutes
Number of valid admonishing incidents | 54
Rate of compliance to admonishing | 52/54
Success rate of ushering | 2,421/2,486 (97.4%)
Service failure rate | 0.2%
Table 1. System Statistics
The robot served 2,486 visitors during its service time. It spent 1,508.3 minutes ushering, announcing, and question answering and 30.9 minutes admonishing; that is, the robot spent only 2.0% of its working time admonishing visitors. Of the 62 admonishing incidents, we excluded 8 from our analysis upon closer evaluation because they were found not to be examples of disobedience (see Section 6.2.2). Thus, 54 admonishing attempts were identified as valid; in 52 of them, the visitors complied with the robot. Among the 2,486 visitors who joined the queue, the robot successfully ushered 2,421, an ushering success rate of 97.4%. During queue management, the robot experienced service failures in four incidents, in which staff members had to intervene in the queue-management process. These involved situations where the robot failed to redirect visitors who had formed queues in the wrong direction and where visitors who did not intend to join the queue ended up waiting close to the queue area. The total duration of robot service failure was just 3.3 minutes, which is 0.2% of its total service time.
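The reported percentages follow directly from the counts in Table 1. As a quick arithmetic check:

```python
# Counts taken from Table 1 and the accompanying text.
total_service_min = 1539.2
autonomous_min = 1494.7
failure_min = 3.3
ushered, joined = 2421, 2486
complied, valid_admonish = 52, 54

autonomy_rate = autonomous_min / total_service_min  # ~0.971 (97.1%)
ushering_rate = ushered / joined                    # ~0.974 (97.4%)
failure_rate = failure_min / total_service_min      # ~0.002 (0.2%)
compliance_rate = complied / valid_admonish         # ~0.963
```

Each reported figure (97.1% autonomy, 97.4% ushering success, 0.2% service failure) is thus reproducible from the raw counts.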

6.2 Interaction Scenes

6.2.1 Typical Interactions.

Families with preschool or elementary-school children were the primary visitors to the event. When they approached the train facility, the robot ushered them in. Figure 12 shows a typical ushering incident. After the robot pointed to the end of the queue and asked visitors to join it, most obeyed. Some responded to the robot’s request by nodding, saying they understood, or thanking it. However, around 10% of the visitors didn’t join the queue after the robot’s first request. They seemed curious about the robot, especially the children; they stopped in front of it, observed its behavior for a moment, and then joined the line. Children often did not voluntarily follow the robot’s request; instead, their parents explained its request and encouraged them to comply. During non-peak times, the queue generally had between 2 and 12 people, and between 10 and 30 during peak times. The peak time lasted about 90 minutes (usually between 10:30 a.m. and 12:00 p.m.). Figure 13 shows the robot’s queue-management service during peak time.
Fig. 12. Robot’s ushering behavior.
Fig. 13. Robot managing a queue during peak time.
A considerable number of visitors approached the robot to talk when they had free time. Some were not necessarily aiming to line up for the train ride but were attracted by its presence. The majority only attempted friendly conversation: they usually greeted the robot, asked its name, and wished it good luck with its duty. A few visitors asked questions related to the train facility, such as ticket prices. They seemed impressed when the robot was able to answer their questions. Most visitors took pictures of their children with the robot and said “goodbye.”
Children exhibited various reactions to the robot. Most were interested in it. Many who came to use the train facility or were heading elsewhere approached the robot. Some ran toward it after noticing it before their parents arrived; others approached with their parents. Parents encouraged their children to interact with the robot and sometimes held toddlers in front of its face. The children observed the robot, touched it, held its hand, talked to it, and hugged it (Figure 14). On the other hand, some children, especially toddlers, seemed scared and tried to avoid it.
Fig. 14. Children’s interactions with robot.

6.2.2 Admonishing Incidents.

The robot admonished visitors who didn’t move up in the queue, were waiting in the queue in a disorganized fashion, or were waiting near its end without joining it. We analyzed the admonishing incidents to understand how visitors obeyed the robot. The robot admonished visitors in 62 incidents. However, we removed eight admonishing incidents from our analysis because closer evaluation identified them as actually not examples of disobedience. For example, five visitors corrected their behavior just before the robot admonished them, and sometimes human staff influenced the visitors’ behavior. In two incidents, staff approached visitors who were waiting in front of the robot as their children aggressively touched it just before it admonished them to line up, and the operator wrongly judged the visitor’s behavior. In one incident, even though the visitors looked as if they were engaged in inappropriate behavior, when the robot approached them closer and admonished them, the operator realized they were “innocent” and stopped further admonishments. Thus, we identified only 54 valid admonishing attempts with which we judged the visitor’s obedience to the robot.
In 52 of the 54 admonishing attempts, visitors corrected their behavior after hearing the robot’s admonishment. In the other two, visitors repeatedly ignored the robot’s admonishments, and human staff were forced to intervene. A majority of the visitors who initially ignored the robot corrected themselves after hearing its first admonishment, and the others complied with the second or third admonishment. Interestingly, several visitors seemed amazed by the robot’s admonishing behavior and laughed. We didn’t observe any aggressive reactions or arguments between the robot and the visitors.
Figure 15 shows an example of a successful admonishing incident that involved two mothers and their children. They were waiting in line as a group, which made the line look disorganized (Figure 15(a)). After noticing their behavior, the robot approached and asked them to line up along the chain (Figure 15(b)). They immediately complied (Figure 15(c)). The robot thanked them for their cooperation (Figure 15(d)). They seemed impressed by its behavior.
Fig. 15. Successful admonishing incident.
Figure 16 shows an unsuccessful admonishing incident, which involved a mother and her two children who came for a train ride. At first, they stood in the queue in a disorganized way. The robot noticed and admonished them to line up along the chain (Figure 16(a)), but they ignored it and sat on the ground near the queue’s end. The robot admonished them two more times (Figure 16(b) and (c)). Although the mother appeared to have heard the robot, she continued to ignore it and remained preoccupied with talking to her children. The robot abandoned its admonishment attempts. A staff member who observed the incident (Figure 16(c)) approached the group (Figure 16(d)) because they had completely ignored the robot. As she approached, the mother finished her task and joined the queue with her children (Figure 16(e)).
Fig. 16. Failed admonishing incident.

6.3 Interview Results of Visitors

During the field trial we conducted semi-structured interviews with 87 visitors (including the 6 people who were admonished by the robot) to learn their impressions of it and its queue-management service. We did not interview any children.
We analyzed our interviews with the visitors using a qualitative content analysis method to identify common opinions and the reasons behind them [17]. We used a bottom-up approach to form exclusive top categories (i.e., opinions) and subcategories (i.e., reasons); thus, the category labels are defined based on the interview data. We classified each answer into one of the top categories and the given reasons for holding that opinion into the applicable subcategories under that top category. The category labels were defined by the authors, and the interviews were classified collaboratively with external coders.
We analyzed the visitor interviews under four topics: impressions of the robot and its ushering service, ushering service quality, the admonishing service, and a comparison of the suitability of the robot and humans for the queue-management role. Note that the number of interviews for some topics is less than 87, since the interviewers couldn’t question every person due to time limitations and human error. The analysis results are presented below.

6.3.1 Impressions of Robot Security Guard and Ushering Service.

We analyzed the answers of 87 visitors to learn their impressions about our robot and its ushering service: 60 had positive impressions, 21 had neutral impressions, and 6 had negative impressions. Their opinions are summarized in Table 2.
Impression | Reason
Positive (60/87) | Robot has specific merits (39/60); capable of queue management/ushering (32/60)
Neutral (21/87) | Merits and demerits (19/21); no specific impression (2/21)
Negative (6/87) | Less capable of queue management (6)
Table 2. Summary of Visitors’ Impressions of the Robot and Its Ushering Service
The 60 visitors who had positive impressions gave the following merits of the robot and its ushering service.
Robot-specific merits: Thirty-nine of the 60 visitors commented on merits unique to the robot: being cute, its novelty, being good for children, offering a solution to labor shortages, and providing a good non-contact method during the COVID-19 outbreak.
“I thought it was cute, especially being dressed like a policeman or a security guard” (robot specific: cute).
“Kids like that kind of thing, and they’ll enjoy seeing it” (robot specific: good for children).
Capability of ushering and queue management: Thirty-two of the 60 visitors felt that robots are capable of ushering and queue management. They commented on the robot’s ability to sense visitors and usher them to the queue’s end. Some added that the robot’s ushering was easy to understand and human-like and that it grasped situations and navigated smoothly without colliding with people.
“It was so smooth that it was almost like being guided by a person, I thought it was amazing.”
“I thought it was wonderful. It watched people’s movements and quickly moved out of the way so as not to disturb them, guided them precisely, and identified the end of the line for them.”
Twenty-one of 87 visitors had a neutral impression about the robot and its ushering service.
Merits and demerits: Most (19 of 21) provided both merits and demerits. For the former, they mainly mentioned its capability of ushering (8 of 19) and its specific merits (8 of 19). As demerits, they pointed out its low communication capabilities (insufficient variety in its conversations and facial expressions) (6 of 19), its scariness (especially to very young children) (5 of 19), its low ushering ability (3 of 19), and anxiety about safety around a robot (2 of 19).
“I thought it was cute, although I felt that it had less impact, was harder to understand, and a little different from what humans would do” (robot specific merit (cute), less capable of ushering).
“I think it should have a few more friendly words for kids. But, overall I think he did a great job and guided us properly” (low communication capability, capable of ushering).
No specific impression: Only 2 of 21 visitors didn’t have any specific impressions about the robot and its ushering service.
Only 6 of 87 visitors expressed a negative impression about the robot and its ushering service. They thought that the robot lacked queue-management capability.
Less capable of queue management: These visitors felt that the robot could not perform its ushering duties due to such limitations as its soft voice, mismatched instructions, and lower capability than humans; its machine-like voice was also less impressive than a human’s:
“I thought it was slightly less impressive because it had a machine-like voice, and unlike a human voice, it didn’t grab my attention as much.”

6.3.2 Ushering Service Quality.

We analyzed the visitors’ opinions about the effectiveness and naturalness of the robot’s ushering behavior.
(1)
Effectiveness of ushering service
We analyzed the visitor responses about the understandability of its ushering service and their compliance to determine whether the robot’s ushering was actually effective. We identified two opinion categories from the visitors’ answers: adequate to follow and inadequate to follow. We labeled the visitors’ comments that described the robot’s ushering service as useful for queueing up and its merits as adequate to follow. On the other hand, those who thought the robot’s ushering was not useful for lining up due to its demerits (so they lined up by themselves) were classified in the inadequate to follow category.
Sixty-six of 74 visitors said the robot’s ushering service was adequate to follow, and only 8 said it was inadequate to follow. Table 3 summarizes our analysis of their opinions and justifications.
The visitors who said the robot’s ushering was adequate to follow gave the following reasons:
Capability of ushering: Forty-three of 66 said that the robot has adequate capability for ushering. They described the good features of its behavior, such as a clear voice, effective utterances and gestures, prompt ushering, and so forth.
“He led me by holding out his hand. I thought, Oh, I should go this way. I was simply amazed that a robot could go this far!”
Easy to understand: Sixteen of 66 visitors particularly mentioned that following the robot’s ushering behavior was simple.
“It was easy to understand because it told me exactly where to line up.”
Understanding robot’s role: Four of 66 people said that because they understood the robot’s role (from its uniform and appearance), they decided to follow its ushering:
“Well, I guess I felt like I had to follow the robot because he was dressed like a security guard.”
Robot-specific merits: Three of 66 visitors pointed out merits particular to the robot, such as its novelty and its suitability for educating and serving children.
“A cute stationmaster like a robot was guiding us. I said to my children, ‘Let’s follow what Mr. Robot is saying.’ It is a good chance to educate them, and I will follow the robot’s request with my children.” (robot specific: suitable for educating children).
Same impression as humans: Two of 66 visitors stated the robot gave them the same impression as a human:
“I don’t really care whether a person or a robot tells me where to line up, I guess it’s the same thing.”
Eight visitors, who believed that the robot’s ushering was inadequate to follow, mentioned the following limitations of its ushering service.
Less capable: Four of eight visitors complained that the robot was less capable of ushering because it responded too slowly, its instructions didn’t match the context, and its voice was too soft:
“The robot told me to get in line after I was already in line, so I thought, Oh, yeah, whatever” (less capable: too late ushering).
Hard to understand: Another four visitors said it was hard to understand the robot’s instructions and behavior:
“I couldn’t understand the robot. He said something about following a line, but I was wondering if he meant the line that resembled a batten. It was a little confusing.”
(2)
Naturalness
We analyzed 83 visitor impressions about the naturalness of the robot’s ushering behavior. Most based their answers on a comparison of robot and human behaviors. We identified three different opinions: natural (51/83), robotic but acceptable (19/83), and unnatural (13/83). Table 4 summarizes the visitor impressions and reasons.
Visitors who said that the robot’s behavior was natural gave the following reasons:
Appropriate gestures: Fifteen of 51 visitors described the robot’s gestures during its ushering behavior, including hand gestures, eye contact, and body orientation, as appropriate and natural:
“We made brief eye contact, which felt natural.”
Smoothness: Another 15 of 51 visitors felt the robot’s behavior was natural due to its smooth body movements and navigation:
“Well, I think its movements were natural. The waving of its hands and movements were not so jerky and soft.”
No unnatural aspects: Ten of 51 said they didn’t notice anything unnatural about the robot’s behavior:
“Well, I didn’t feel anything unnatural.”
Timely reactions: Five of 51 visitors described the robot’s ushering behavior as natural due to its timely reactions to human behaviors:
“It gave me sound guidance the moment I arrived, and it felt very natural!”
Other: Only 2 of 51 visitors noted other reasons, including the robot’s cuteness and quality behaviors:
“It acts so natural, it’s just very well made.”
The 19 visitors who described it as robotic but acceptable gave the following reasons:
Positively accepted robotic behavior: Eleven of 19 positively accepted and even preferred the robot’s behavior. They explained that robots and humans are different entities and that the robot’s current behavior suits it:
“Well, isn’t robotic behavior good? The neck and hand movements are awkward like a bunraku puppet, although that gives a robotic or puppet-like impression.”
Less natural than humans but adequate for a robot: Eight of 19 visitors described the robot’s behavior as unnatural/less natural compared to a human but adequate for a robot:
“It was a bit unnatural, although it didn’t feel strange since he is actually a robot, not a human.”
People who described the robot’s behavior as unnatural pointed out the following issues:
Unsmooth movements: Six of 13 visitors said they felt the robot’s behavior was unnatural due to its jerky body movements:
“I think some of its parts are unnatural. I didn’t think that it was smooth enough, often choppy.”
Low communication capability: Five of 13 visitors commented on the limitations in the robot’s communication, such as a lack of eye contact, incorrect body orientation, and a lack of language skills. These drawbacks led them to describe its behavior as unnatural:
“If it is a security guard robot, I think it is necessary to have two-way face-to-face communication, but I don’t think its two-way communication is perfect yet.”
Other: Two other visitors pointed out other reasons, such as everything about the robot’s ushering behavior being unnatural and its robotic appearance (due to visible wheels) making it unnatural:
“Oh, it’s not natural. Nothing.”
Table 3.
Opinion | Reason
Adequate to follow (66/74) | Capable of ushering (43/66); Easy to understand (16/66); Understanding robot’s role (4/66); Robot-specific merits (3/66); Same impression as humans (2/66)
Inadequate to follow (8/74) | Less capable (4/8); Hard to understand (4/8)
Table 3. Visitor Opinions about Effectiveness of Ushering Service
Table 4.
Opinion | Reason
Natural (51/83) | Appropriate gestures (15/51); Smoothness (15/51); No unnatural aspects (10/51); Timely reactions (5/51); Others (good qualities, cute) (2/51)
Robotic but acceptable (19/83) | Positively accept robot-like behavior (11/19); Less natural than humans but adequate for a robot (8/19)
Unnatural (13/83) | Rough movements (6/13); Lack of communication capability (5/13); Others (many unnatural elements, robotic appearance) (2/13)
Table 4. Visitors’ Impressions about Naturalness of Robot’s Ushering Behavior

6.3.3 Admonishing Service.

We studied the impressions of both admonished and unadmonished visitors about the robot’s admonishing service under the following topics:
(1)
Obedience to robot’s admonishments
We analyzed the intentions of 79 visitors (6 admonished and 73 unadmonished) to obey or disobey a robot’s admonishment and their reasons.
(a)
Admonished visitors
All the admonished visitors we interviewed obeyed the robot’s admonishment. Table 5 summarizes our interview analysis. They mentioned the following reasons for their obedience.
Admitted their mistake: Four of the six admonished visitors said they obeyed the robot because they realized their mistake after it admonished them. One visitor who was warned for waiting in the queue in an unorganized fashion stated, “Oh, I understood, I should follow this line.”
Polite and courteous: One of six visitors complied with the robot because its admonishing behavior was polite and kind: “Yes. I think I’d probably comply because it is polite and its voice sounds good with a nice quality.”
Similar to human admonishing: One other visitor said that a robot’s admonishment and a human’s felt the same: “I don’t think there’s much difference between being told by a human or by a robot.” She added, “I agreed that I was in the wrong, so I didn’t mind which corrected me, a robot or a human.”
(b)
Unadmonished visitors
We analyzed the unadmonished visitors’ intentions to obey an admonishment if they received one. A large majority (66/73) stated they would obey. Another six said their obedience would depend on the situation; only one visitor said she would not obey. Table 6 shows the analysis results.
Sixty-six visitors who said they would obey a robot’s admonishment gave the following reasons for their intentions.
Similar to human admonishing: Thirty of 66 visitors cited features shared by the human- and robot-admonishing services. They pointed out that the robot’s admonishment would help them recognize and accept their own mistakes (admit a fault), give them a sense of being monitored, cause embarrassment similar to a human’s admonishment, and make an impression similar to a human staff member’s:
“If I were unconsciously doing bad behavior, I’d thank you for giving me a chance to fix it” (admitting a mistake).
“He’s watching me closely, and I should not do anything wrong” (feeling of monitoring).
“Well, I think it’s just saying it on behalf of a person, so I think it’s the same as a person, and so in my opinion, I’d stop that bad behavior, if it told me” (feel same as a human staff).
Adequate capability: Eleven of 66 visitors felt the robot had sufficient capability to judge people’s behaviors and admonish them:
“Well, I think that robots recognize and judge queues properly, and I think they are probably smart for making such requests. So, I’d obey.”
Merits unique to robot admonishing: Eleven of 66 visitors gave merits unique to a robot-admonishing service as their reason to obey: it is easier to accept than a human’s admonishment, less offensive, nice, cute and childlike, and effective with children.
“There’s something less sarcastic about it” (less offensive).
“I think it’s easier for explaining to children, since Mr. Robot says, ‘Let’s line up properly,’ and I can say, ‘Let’s listen to Mr. Robot.’ Children might listen more to a robot than to an adult or a store employee” (effective with children).
Others observing: Six of 66 visitors said they would obey a robot because others in the vicinity were observing the incidents. Most stated that they would comply with the robot because they are with their children and want to set a good example for them. Others said they would comply because of the human operator or others in the vicinity:
“One reason to follow the robot’s admonishment is that I have children. If I were single, the robot’s admonishment would annoy me. But since I’m a mother, I want to avoid doing bad things in front of the children. So I follow the robot’s admonishment in front of the children” (setting a good example for children).
Six of 73 visitors said their obedience would depend on the situation. We identified the following two reasons for their opinions:
Uncertainty about robot’s capabilities: Four of six visitors expressed skepticism about the robot’s capability to judge the situation and admonish. Furthermore, one pointed out that a robot should use non-offensive words for its admonishment. Thus, their compliance would change based on their own judgment at the moment of the robot’s request:
“Depending on the situation, but I don’t know. I might think that the robot was making a mistake, I don’t know if it is correct or not.”
Factors independent of admonisher type: Two of six visitors said that, as with a human’s admonishment, their compliance with the robot’s admonishment would depend on such factors as the time and the current situation:
“If a robot warned me, I don’t think my response would depend on the robot, but rather on the situation around me at the time.”
Only one visitor said she would not obey a robot’s admonishment:
“No, I don’t think I’d obey it, probably just because it is a robot.”
(2)
Feelings when receiving a robot’s admonishment vs. a human’s
We analyzed the answers of the 6 admonished and 73 unadmonished visitors to the question whether they felt the same or different when they were admonished by a human or a robot. Our findings are presented below.
(a)
Admonished visitors
Admonished visitors gave the following four types of opinions comparing a robot’s admonishment and a human’s: an admonishment from a human and a robot feels the same (2/6), a robot’s admonishment is easier to accept than a human’s (2/6), the feeling depends on the situation (1/6), and a robot’s admonishment is more uncomfortable than a human’s (1/6). Table 7 summarizes their opinions and reasons.
Two visitors who said human and robot admonishments feel the same stated that a robot’s admonishment has the same persuasive power as a human’s. A person who was admonished for not moving forward explained: “Ah, I will probably follow the robot’s admonishment and move forward. It is the same feeling as with humans.”
A pair of visitors who believed that a robot’s admonishment was easier to accept than a human’s gave the following reasons. One said that a robot’s admonishment was less annoying than a human’s because it did not contain any emotion: “When we are admonished by another person, the utterance generally contains some emotion, and that is annoying. A robot’s utterance does not carry such emotion. Since it is just explaining facts without emotion, it’s less annoying.” The other preferred the robot-admonishing service: “I’d rather be corrected by a robot than by a person. My impression is different based on how a person says things or from his tone of voice. If a person is soft-spoken, that’s fine, but if a person has a harsh tone, I might struggle to follow his request, even though I know I should. But if a robot says it, it’s like, oh, yes, I see.”
The person who stated that his feelings would depend on the situation mentioned that it would also depend on his own mental state at the moment: “Ah, if I’m feeling relaxed, then I don’t feel any bad impressions.”
The visitor who believed that a robot’s admonishment is more uncomfortable than a human’s said that the robot’s lifeless impression and the unpredictability of its intentions scared him. He explained: “I felt like I was slightly being pressured in front of the robot, something like the impression of an inorganic object. I had an impression of, what is this, and I felt scared or something like that. When I’m talking with a robot, I can’t imagine what it will say next. But with a person, I can get the message that I’m standing in the wrong location before he tells me where to line up. With a robot, I might feel anxious because I can’t anticipate what it might say.”
(b)
Unadmonished visitors
We studied the opinions of 73 unadmonished visitors about how they would compare their feelings about a robot’s admonishment vs. a human’s. We identified five different opinions: a robot’s admonishment is easier to accept than a human’s (43/73), the admonishments of a human and a robot feel the same (17/73), a human’s admonishment is more powerful than a robot’s (6/73), a robot’s admonishment is more uncomfortable than a human’s (3/73), and other (4/73). Table 8 summarizes our analysis.
Forty-three of 73 visitors said a robot’s admonishment would be easier to accept than a human’s. They commented on the merits of a robot’s admonishment, such as being less uncomfortable or offensive (22/43) and emotionless (i.e., unlike humans, robots don’t have emotions) (15/43). Furthermore, contrary to the human-admonishment cases, some said they would not react emotionally to a robot’s admonishment (4/43). In addition, four visitors noted that a cute robot’s admonishment would be easier to accept (4/43):
“I think robots consider many things, such as ways of talking or expressing admonishment to avoid uncomfortable representation, so I don’t think it gives a bad impression” (less offensive).
“There are no unnecessary emotions, so I can follow it smoothly because I’m just following the rules (of this location)” (robot’s admonishment is emotionless).
Seventeen of 73 visitors stated that a human’s admonishment and a robot’s would feel the same. Twelve mentioned that a robot has the same level of persuasiveness as a human. Another four noted that they have no particular bad impressions about a robot’s admonishment:
“I just feel like, oh, I must’ve made a mistake in the row, or something like that. My impressions of the admonishments of the robot and the human are the same. Um, the robot and the person are making the same utterance and the same message, so I understand” (identical level of persuasiveness).
Six of 73 visitors said they feel a human’s admonishment is more powerful than a robot’s due to the characteristics of the former. Four described humans as more persuasive and harder to ignore. Three others described human admonishments as being emotional and hence more powerful:
“A human’s admonishment is more forceful” (humans are more persuasive).
Three of 73 visitors thought a robot’s admonishment would be more uncomfortable than a human’s for the following reasons. Two mentioned that being admonished by a robot would be more embarrassing than similar treatment from a human. The other doubted the robot’s ability to judge the situation:
“I might feel more embarrassed if a robot chastised me than if a person did…. ” (a robot’s admonishment causes more feelings of embarrassment than an admonishment from a human).
Four of 73 visitors had other opinions. Three said they couldn’t imagine how to compare being admonished by a robot and by a human. The other visitor dismissed the robot because it was just a machine.
(3)
Impressions about robot that admonished people
We analyzed the opinions of 80 visitors (5 admonished and 75 not) about robots that admonish people. Our analysis results are presented below.
(a)
Admonished visitors
We analyzed five admonished visitors’ impressions about our robot-admonishing service (Table 9). All had positive impressions. We were able to hear reasons for these positive impressions from four of the visitors. Three of them said they had no particular resistance to a robot-admonishing service. One said that a robot’s admonishment is easier to accept than a human’s: “Well, such warnings are good, because we don’t have to apologize, and since it is a machine, it’s easy to accept its request. For the human-human case, a person might get angry and cause trouble. However with a robot, I just think, Okay, I see.”
(b)
Unadmonished visitors
Table 10 summarizes our analysis of the opinions of the unadmonished visitors about a robot that admonishes people. A majority (65/75) had a positive impression of such a service. They gave the following reasons for their positive impressions.
Merits specific to robot-admonishing service: Forty of 65 mentioned positives specific to a robot-admonishing service that are absent in a human-admonishing service: admonishing is easier for a robot than for a human, a robot’s admonishment is easier to accept, its admonishing capability is impressive, it avoids conflicts, it is effective with children, and it is cute.
“It’s easier to accept a request from a robot” (robot-specific: a robot’s admonishment is easier to accept than a human’s).
“It’s good because the robot can say things that are harder to accept from people” (robot specific: admonishing is easier for a robot than a human).
Similar to human-admonishing service: Twenty-five of 65 visitors felt that the robot-admonishing service resembles a human-admonishing service. They explained that they have no particular resistance to the former because a robot is a good substitute for a human, and robots provide a feeling of security:
“I don’t feel any differently between a human or a security guard robot, because both are just doing their jobs” (similar to human-admonishing service: no resistance).
“Oh, a robot gives the impression and a sense of security, and I know that they are looking out for me” (similar to human-admonishing service: feeling of security).
Eight of 75 unadmonished visitors expressed a neutral impression about the robot-admonishing service for the following reasons:
Uncertainty about effectiveness: Five of eight visitors were uncertain about the effectiveness of a robot-admonishment service. Since they were not directly warned by the robot, they tended to make assumptions about how other people might behave and expressed broad opinions. They stated that obedience would depend on the person being admonished: people who intentionally engage in inappropriate behavior would probably refuse to comply. One wondered whether children could understand and comply with a robot’s admonishment.
“I think that those who unconsciously make mistakes will think, oh, I’m sorry, but those who are acting intentionally will probably just dismiss such admonishments as nagging.”
Depends on the admonishing language: Two of eight visitors said their impression would depend on how the robots admonish, whether they are impolite or not. If the robot’s admonishment is not rude, they would have a positive impression:
“Well, it’s okay if the robot’s language is polite.”
Effective but more embarrassing than a human’s admonishment: One of eight visitors who had a neutral opinion said that although a robot’s admonishment is effective, it is more embarrassing than being admonished by a person.
“Oh, being admonished by a robot was more embarrassing than by a human. But after being admonished by it, I realized the robot was right.”
Only 2 of 75 visitors had negative impressions about the robot-admonishing service. Both preferred humans for such services. They thought that people would resist being admonished by a machine and that a robot’s admonishment draws too much public attention, which intensifies the receiver’s embarrassment.
“I don’t think people want to be told what to do by a robot. People won’t accept being admonished by a machine.”
Table 5.
Obedience | Reason
Obeyed (6/6) | Admitted fault (4/6); Polite and courteous (1/6); Similar to human admonishing (1/6)
Disobeyed (0/6) | -
Table 5. Summary of Interviews of Admonished Visitors about Compliance with Robot’s Admonishment
Table 6.
Intention | Reason
I will obey (66/73) | Similar to human admonishing (30/66); Adequate capability (11/66); Merits unique to robot admonishing (11/66); Others observing (6/66)
Obedience depends on situation (6/73) | Uncertain of robot’s capabilities (4/6); Factors independent of admonisher type (2/6)
I will not obey (1/73) | Because it is a robot (1/1)
Table 6. Summary of Analysis of Unadmonished Visitors’ Intentions to Comply with Robot’s Admonishment
Table 7.
Impression | Reason
Human’s and robot’s admonishments feel identical (2/6) | Identical level of persuasiveness (2/2)
Robot’s admonishment is easier to accept than human’s (2/6) | Robot’s admonishment is emotionless (1/2); No emotional reaction to robot’s admonishment (1/2)
Impression depends on situation (1/6) | Depends on mental state (1/1)
Robot’s admonishment is more uncomfortable than human’s (1/6) | Scary (1/1)
Table 7. Summary of Analysis of Admonished Visitors’ Feelings When They Are Admonished by a Robot or a Human
Table 8.
Feeling | Reason
Robot’s admonishment is easier to accept than human’s (43/73) | Less uncomfortable/less offensive (22/43); Robot’s admonishment is emotionless (15/43); No emotional reactions to robot’s admonishments (4/43); Robot is cute (4/43)
Admonishments of humans and robots feel identical (17/73) | Same level of persuasiveness (12/17); No particular bad impression (4/17)
Human’s admonishment is more powerful than a robot’s (6/73) | Human is more persuasive (4/6); Human’s admonishment is emotional (3/6)
Robot’s admonishment is more uncomfortable than a human’s (3/73) | More embarrassing than from a human (2/3); Doubt about capability (1/3)
Other (4/73) | Difficult to imagine the feelings (3/4); Robot is just a machine (1/4)
Table 8. Summary of Analysis of Unadmonished Visitors’ Feelings When They Are Admonished by a Robot or a Human
Table 9.
Impression | Reason
Positive (5/5) | No particular resistance (3/5); Robot’s admonishment is easier to accept than a human’s (1/5)
Negative (0/5) | -
Neutral (0/5) | -
Table 9. Summary of Impressions of Admonished Visitors about a Robot That Admonishes People
Table 10.
Impression | Reason
Positive (65/75) | Merits specific to robot-admonishing service (40/65); Similar to human-admonishing service (25/65)
Neutral (8/75) | Uncertainty about effectiveness (5/8); Depends on robot’s admonishing language (2/8); Effective but more embarrassing than human admonishment (1/8)
Negative (2/75) | Prefer humans (2/2)
Table 10. Summary of Impressions of Unadmonished Visitors about a Robot That Admonishes People

6.3.4 Comparison of Suitability for Queue-Management Role: Robot vs. Human.

We analyzed 72 visitors’ opinions about the suitability of a human vs. a robot for queue management. We identified three types of opinions: a robot is better (23/72), a human is better (19/72), and no difference (30/72), along with the reasons for each. Table 11 summarizes our analysis.
Table 11.
Opinion | Reason
Robot is better (23/72) | Robots are more acceptable to people (10/23); Robot’s specific merits (9/23); Reduces human workload (1/23)
Human is better (19/72) | Human capabilities (16/19); Other (3/19)
Neutral (30/72) | Complementary merits and capabilities (20/30); Either is fine (10/30)
Table 11. Summary of Visitors’ Opinions Comparing the Suitability of Robots vs. Humans for the Queue-Management Role
Twenty-three of 72 visitors stated a robot is better for queue-management services than a human for the following reasons:
Robots are more acceptable to people: Ten of 23 visitors said robots will be more accepted by people due to such merits as being less rude and easier to accept.
“I think robots are better. They are easier to accept. With a human, there might be trouble, but not with a robot.”
Robot’s specific merits: Nine of 23 visitors described merits specific to robots: cuteness, cheap labor, appeal to children, and wide sensing capability.
“I think a human guard has a bit of a large blind spot. But a robot would have better ability to check out the whole environment and notice something. It has cameras and can make judgments from that information” (robot specific: wide sensing capability).
Reduces human workload: Only 1 of 23 visitors said a robot would reduce humans’ workload.
Nineteen of 72 visitors believed that humans are better suited for queue-management services for the following reasons:
Human capabilities: A majority (16/19) of these visitors believed that humans are better due to such capabilities as acting based on the specific situation, better judgment, being less susceptible to deception, fast responses, communication skills, and being easier to understand.
“I think that a human would be able to see a person and make better judgments.”
Other: Three of 19 visitors mentioned additional reasons: robots give a machine-like impression, humans are more conspicuous due to their size, and humans need jobs.
“I think a human guard is better. You know, a robot looks like a machine” (Other: robots give a machine-like impression).
Thirty of 72 visitors expressed a neutral opinion about which is better for queue-management services. They pointed out the following two reasons:
Complementary merits and capabilities: According to 20 of 30 visitors, since humans and robots have distinct advantages and skill sets, one or the other may be required, depending on the situation, to maximize each one’s unique advantages. They believed that robots are better for children’s events because they are cute and will attract children’s attention; they also pointed out that some children might be scared when there are too many adults. On the other hand, some visitors stated that robots are less effective and irrelevant for adult-only events, where humans are more suitable: robots for children’s events and humans for adults’ events. They added that humans are better in crowded, complex, or dangerous situations that require accuracy, fast responses, and intricate communication capabilities, whereas robots are good for simple and repetitive tasks or as an attraction.
“If children are present, such as at this event, they will be interested in the robot. When there are only adults, I think the robot service would be more difficult” (complementary merits and capabilities: robots for children’s events, humans for adults’ events).
“Oh, I think both humans and robots are necessary. It would be better to have one person and five robots. Since admonishments sometimes cause trouble or arise in odd situations, the robot can’t possibly do everything. But for standard operations, like repeating the same action, like checking tickets, a robot fits the situation better, I think” (complementary merits and capabilities: humans for complex tasks, robots for simple and repetitive tasks).
Either is fine: Ten of 30 visitors said that since there is no difference between the services provided by robots and humans, either is fine. They believed the robot also has adequate capability for queue management:
“Perhaps either is fine. I don’t think there is that much of a difference.”

6.4 Staff Interview Results

We interviewed four members of the event staff (2 males and 2 females) with a mean age of 23.75 years (SD = 2.75). All had 1–2 years of experience working at public events. Only two had prior experience with a robot (i.e., Pepper).
We analyzed their interviews to learn their opinions of our robot and its services. We used the same qualitative analysis approach described in Section 6.3. Their opinions were categorized into exclusive main categories and their reasons as subcategories.

6.4.1 Initial Expectations of Robot.

Two of four members had no expectations for the robot, saying they were unfamiliar with robots. The remaining two expected the robot to record the event with its camera to provide proper evidence (1/4) and to interact with children (1/4).

6.4.2 Impressions about Robot Security Guard.

When the staff members were asked about their impressions of robot security guards, three of four expressed a positive impression, and the remaining member gave a neutral impression. Table 12 summarizes the analysis.
Table 12.
ImpressionReason
Positive (3/4)Complements human staff (1/3)
Ushering capability (1/3)
Good for children (1/3)
Neutral (1/4)Interesting to some children; scary to others
Negative (0)-
Table 12. Summary of Staff’s Impression about Robot Security Guard
The three members with positive impressions provided the following reasons. One said that the robot worked as a complement to the staff (1/3): “Another member of the staff who cares about what people can’t do. It was talking to people who were waiting in line separately or who didn’t know where to stand in line.” Another commented on the robot’s ushering capability: “I think the robot was actually guiding the people. After hearing the robot’s utterance, something like ‘don’t be shy’, a customer follows the robot’s request, so I thought that it was working properly.” The third described the robot as good for children: “It was easy to tend to children because it looks friendly, so they can come around and watch the robot. They were curious about it.”
The staff member with a neutral impression described the robot as interesting for some children while scary for others: “On the positive side, it captured the children’s attention and I thought they were slightly interested in approaching it, but on the other hand, many children were scared and crying.”

6.4.3 Impression about Robot’s Service.

(1)
Impression about overall service
Two of four members had positive impressions of the overall service provided by the robot, and the remaining two had neutral impressions. Their impressions and reasons are listed in Table 13.
One of two members who had positive impressions said that a robot’s work could complement the human staff: “it’s not just people, it’s convenient to have someone do it. It is useful that the robot checks (and handles) the end of the queue since I was unable to do so today.” The other member commented on the robot’s capability of queue management: “The good thing is that it told them to get in line like a human and answered questions, which I thought was great.”
The two staff members who had neutral opinions commented on both the robot’s merits and demerits. One mentioned that even though the robot had sufficient capability for ushering, it scared some children. The other described it as a good attraction but slightly criticized its ushering capability due to its low voice.
(2)
Usefulness of service
When we asked the staff whether the robot’s service was useful, three said it was, and one gave a neutral opinion. Their answers are summarized in Table 14. The three staff members who found the service useful described such merits as complementing the human staff, reducing their workload (“It was a good service for this event because it lowered the work we had to do”), and being good for children (“it got along easily with children”).
The following is a neutral opinion of a staff member: “I feel like it’s just another attraction. Not so many people interact with robots on a daily basis, so they can just enjoy a new experience. But in terms of guidance, many of its services aren’t good enough yet.”
Table 13.
ImpressionReason
Positive (2/4)Complements human staff (1/2);
Queue-management capability (1/2)
Neutral (2/4)Merits and demerits (2/2)
Negative (0)-
Table 13. Summary of Staff Impressions about Overall Service of Robot
Table 14.
ImpressionReason
Useful (3/4)Complements human staff (2/3);
Reduces work load (1/3);
Good for children (1/3)
Useless (0)-
Neutral (1/4)Good as an attraction but less capable of ushering
Table 14. Staff Opinions about Usefulness of Robot’s Service

6.4.4 Ushering Service.

(1)
Impression about ushering service
Table 15 summarizes the impressions of the event staff members of the robot’s ushering service. Three of them gave positive impressions. One expressed a negative impression.
All three members who had positive impressions cited the robot’s adequate capability for ushering as their reason for being impressed: “I was amazed that it could guide people properly.” They added that the robot’s standing location made it conspicuous to new visitors. “Because it’s more visible at the end of the line than at the front, it got the attention of new customers.”
The staff member who gave a negative impression thought the robot was less capable of ushering due to such faults in its ushering behavior as a low voice and an ineffective ushering strategy. She noticed that some visitors who waited in front of the robot continued to interact with it for a long time without joining the queue: “The robot sometimes said please form a line, but the visitors didn’t obey. They didn’t seem to understand the queue and lined up incorrectly in it. I don’t think the visitors could hear it.” She suggested that “the robot should accompany newcomers to the end of the line.”
(2)
Naturalness of ushering behavior
We asked the event staff whether they felt the robot’s ushering behavior was natural or unnatural. Three of four said it was natural, although one staff member said judging its naturalness was difficult (Table 16).
The staff members who felt the robot’s behavior was natural gave the following reasons. One thought that even though its ushering behavior was not on the same level as that of humans, it was natural enough for a robot. “It recognized people, although of course not as smoothly as a human. But I think it did recognize people and effectively guided them.” Another felt the robot’s behavior was smooth: “I thought it would talk slowly, but it talked quite smoothly. It also moved well.” Another member stated there were no unnatural aspects about the robot’s behavior: “Nothing bothered me. It felt like it was working as part of our team.”
The staff member who said judging the robot’s behavior was difficult gave the following reason: “Since I have no experience with guiding robots, I don’t feel qualified to comment on that aspect.”
Table 15.
ImpressionReason
Positive (3/4)Adequate capability (3/3)
Negative (1/4)Less capable
Table 15. Summary of Staff’s Impression about Ushering Service
Table 16.
ImpressionReason
Natural (3/4)Natural enough for a robot (1/3);
Smoothness (1/3);
No unnatural aspects (1/3)
Unnatural-
Difficult to judge (1/4)Not familiar with guiding robots
Table 16. Summary of Staff’s Impression of Naturalness of Robot’s Ushering Behavior

6.4.5 Admonishing Service.

We analyzed the staff’s impressions about the robot’s admonishing service under the following two topics:
(1)
Impression about admonishing service
Table 17 summarizes the impressions of the event staff about the robot-admonishing service. Three of four members expressed positive impressions about it; one gave a neutral impression.
One of the three members who expressed a positive impression commented that, unlike human admonishments, the robot’s admonishments caused fewer conflicts: “I think some people got offended by being admonished, but you can’t argue much with a robot.” Another staff member described the robot’s admonishments as effective: “The robot spoke clearly, so the parents seemed to understand its request. They carefully looked at their children and admonished them when they goofed off.” The third staff member said they struggled to focus on the visitors in the queue because they were busy with visitors inside the train facility; thus, the robot’s handling (and admonishing) of those in the queue compensated for the staff’s limited capacity (complementing the human staff).
The staff member who had a neutral opinion commented on the merits and demerits of the robot’s admonishing services. She pointed out that although its admonishment might be successful with adults, it would probably be less effective with children.
(2)
Suitability for admonishing service (robot vs. human staff)
When asked whether a human or a robot is better suited for the admonishment service, the staff gave divergent answers. Two of four said the robot is better, one said that suitability depends on the situation, and the remaining staff member thought it was too difficult to judge. Their opinions and reasons are summarized in Table 18.
The two staff members who thought a robot would be better stated the merits of its admonishing service. One pointed out that a robot’s admonishment is easier for visitors to accept than a human’s. “I think people will listen to robots more readily because they don’t have any emotions.” Another staff member said placing the responsibility of admonishing on robots reduced their own mental burden.
The one who stated that suitability would depend on the situation commented on the merits of humans (i.e., human capabilities and a loud voice) and of robots (their admonishment is easy to accept because it is emotionless). Thus, either could be used, depending on the requirements: “I think a robot would be fine inside a room, but if it is noisy or outside, a human might be more effective.”
The staff member who couldn’t decide which was more suitable commented on the following merits and demerits of a robot’s admonishment. As a demerit, she stated that a robot’s admonishment is less effective than a human’s due to the former’s low capability: “I think that customers are more likely to listen to a human. There are some things like eye contact. When I (a human) admonish customers, they understand my gaze and think I’m saying something to them. But I think it’s still a little difficult for the robot.” On the other hand, she added that human capacity is also limited, so the robot’s admonishment helps compensate for limited human resources.
Table 17.
ImpressionReason
Positive (3/4)Fewer conflicts (1/3);
Effective (1/3);
Complements human staff (1/3)
Neutral (1/4)Merits and demerits
Negative (0)-
Table 17. Summary of Staff’s Impression about Robot’s Admonishing Service
Table 18.
OpinionReason
Robot is better (2/4)Robot’s admonishment is easier to accept than human’s (1/2)
Reduced mental burden on staff (1/2)
Depends on situation (1/4)Complementary merits
Difficult to judge (1/4)Merits and demerits
Table 18. Staff’s Opinions about Suitability for Admonishing Service (Robot vs. Human)

6.4.6 Suitability for Queue-Management Service Robot vs. Human.

We analyzed the event staff’s opinions about who is more suitable for a queue-management service, a robot or a human. All said that suitability depends on the situation. Table 19 summarizes our analysis.
Table 19.
OpinionReason
Robot is better (0)-
Human is better (0)-
Depends on situation (4/4)Complementary merits (2/4)
Depends on robot’s capabilities (1/4)
Depends on scale of event (1/4)
Table 19. Staff’s Opinions about Suitability for Queue-Management Service of Robots and Humans
Two of four staff members commented on the complementary merits of humans and robots. They pointed out such merits of humans as their capabilities and their ability to handle unexpected situations, and such merits of robots as their attractiveness and their ability to compensate for a shortage of human staff. “I think a robot can help reduce the number of staff when we do a certain type of work or guide people around. But when children are running around at an event like this, or when something unexpected happens, a robot alone is not enough.” One staff member said the suitability of a robot depends on its capability: “I think either is fine, but it depends on the robot’s level of speech recognition and capability of answering.” Another said that suitability depends on the scale of the event. He explained that a robot is suitable for less crowded events and humans for more crowded ones: “If the event were less crowded, I think robots are helpful, but when there are too many people, a robot would struggle to organize the lines in place of a human.”

6.4.7 Intention to Use Robot in Future.

When we asked about the staff’s intentions to use the robot for future events, three of four said they would like to use it, and one said her decision would depend on the situation. Their answers and reasons are presented in Table 20.
Table 20.
AnswerReason
Yes (3/4)Interesting (2/3)
Reduce workload (1/3)
Expecting robot’s capabilities will improve further in future (1/3)
No (0)-
Depends on situation (1/4)Depends on robot’s appearance
Table 20. Staff’s Intention to Use Robot in Future Events
Two of the three members who were willing to use the robot commented that it is interesting: “I think it would be fun to have a lot of robots.” One stated that the robot would reduce their workload: “It’s also hard for us to take a break if not enough people are working, but robots can always work, so that might help us feel better.” Another was willing to use the robot because of the expectation that its capabilities will improve in the future: “Since I expect that the technology will improve as we conduct more experiments like this in the future, I think it will be good if robots can be more flexible and do more work.”
The staff member who said her decision would depend on the situation pointed out that she would use the robot if it looked friendlier: “I think its face is scary. It should be changed to look friendlier.”

7 Discussion

7.1 Revisiting Research Questions

The objective of this study was to develop a robot security guard that can manage queues in public spaces. Using a robot to control people’s behavior in the real world is challenging because of human reluctance to comply with a robot [27, 38] and the unfavorable perceptions some people hold about such robots [22]. To overcome these challenges, we formulated our first research question: “How can we develop an acceptable and effective robot for regulating people in public spaces?” We proposed a design that mimicked a human security guard’s role, expecting it to help people easily understand the robot’s role and to improve their compliance with and acceptance of it. Our field-trial results showed that, by acting like a professional security guard, the robot persuaded individuals to comply with its admonishments and requests and received their acceptance. Although the number of visitors who self-reported this is limited, visitors recognized the robot’s role as a security guard or a staff member due to its uniform and appearance, which motivated them to comply with it. In addition, we observed several visitors spontaneously commenting to the experimenters, just from watching the robot, that it was working as a staff member. However, it is unclear whether the robot’s uniform itself caused those visitors to comply with its requests. Furthermore, our interview analysis suggests that visitors cooperated with the robot because they perceived in it the features of a professional guard, such as sufficient capability for queue management and admonishments that resembled a human’s. Thus, imbuing a robot with a professional image is an effective and acceptable design for a regulatory robot.
Our second research question asked, “How do people in public spaces perceive a robot that attempts to control their behaviors?” Most visitors accepted this queue-managing robot that attempted to manage their behaviors in real life, and its requests and admonishments were convincing enough to follow. Their opinions of the robot were influenced by such factors as its capacity to provide a service, the clarity and reasonableness of its requests or admonishments, and its attitude. Furthermore, even though some visitors disobeyed it, our results demonstrate that being caught and admonished by a robot in public did not motivate them to confront it or act aggressively. Thus, despite the limited number of interviews with admonished visitors, our field-trial results suggest that people will welcome a regulatory robot service in society.

7.2 Implications

7.2.1 Implications for Design of Robots That Regulate People.

When robots assume authoritative positions in society, they will be required to control the behaviors of others like human professionals do. However, people typically dislike robots that merely attempt to control them through admonishments and punishments [22]. Such an attitude complicates integrating regulatory robots into society. Our study shows one successful solution to this problem.
Our approach is to exhibit a professional image in the robot, implement admonishing as one functionality among several, and use admonishments sparingly, only in unavoidable situations. This design enabled our robot to regulate a crowd in a public space reasonably well while still receiving people’s acceptance, even though it performed its admonishing functionality. We believe this design created the perception that a regulatory robot provides a reasonable service rather than being merely an admonishing machine. We expect that the “exhibiting a professional image” design concept is applicable to contexts where a robot must perform roles that require a professional image and where admonishing is part of its services, such as police officers, managers, teachers, and exam invigilators.
Our research also emphasizes the importance of minimizing admonishing situations and discovering non-confrontational alternatives. Since admonishing creates negative feelings in people and bad impressions of robots, it should be a regulatory robot’s last option. Robots should first try to lower the likelihood of inappropriate behaviors by providing their instructions clearly and understandably. Our interview findings showed that most visitors are willing to cooperate if they clearly understand the robot’s instructions, whereas confusing instructions increase non-compliance. Another precondition for cooperation is that people understand the robot’s role; its appearance and behaviors must be designed in a way that makes its role obvious.
Finally, our results imply the potential of deploying robots for regulating people in public spaces. A robot’s unique capabilities, such as its wide sensing ability, its ability to talk to anyone without experiencing social anxiety, and its ability to work long hours will be helpful for its role.

7.2.2 Implications for Design of Robot-Admonishing Functionality.

To begin with, our results suggest that a robot’s admonishment might be effective and acceptable to reduce inappropriate behaviors. This finding implies the potential of using a robot’s admonishments to lower inappropriate behaviors in society, especially for less serious or unintentional behaviors. However, we believe a robot should minimize its use of admonishments since it runs a high risk of fomenting negative impressions.
Furthermore, since an admonishment from a robot is perceived as being easy to accept and less offensive compared to that of a human, robot admonishments could minimize the negative feelings of receivers and such confrontations as arguments and violence that sometimes arise in admonishment scenarios. Therefore, robot admonishments seem especially fruitful for commercial settings like restaurants, shopping centers, and events that are concerned with customer impressions but still need to curb inappropriate behaviors.
Moreover, our interview results showed that a polite and courteous robot is one reason for compliance with and acceptance of its admonishments. People are concerned about the verbal attitude of an admonishing robot, just as in human–human communication. Therefore, a robot with a polite attitude (e.g., using respectful language and a friendly tone of voice) could be a successful approach to achieve an effective and acceptable admonishing service.
Lastly, our study indicates the importance of future work on developing robot admonishing and instruction-giving behaviors suitable for children. Our field study revealed that young children (of elementary school age) were less likely to voluntarily comply with the robot’s requests. As a result, parents seemed to act as mediators, explaining the robot’s requests to their children and making sure they followed them. This indicates that an admonishing strategy that works for adults could be less effective with children. It would be worthwhile to study children’s reasons for non-compliance: perhaps the robot’s speech is not understandable to young children, or perhaps the robot lacks an adult’s persuasiveness. Future studies should also consider improving robot admonishing (or instruction-giving) strategies for children. One approach could be modeling the strategies of professionals who deal with children, such as teachers or childcare providers.

7.2.3 Admonished vs. Unadmonished Visitor’s Opinions about a Robot-Admonishing Service.

In our field trial, we had the rare opportunity to listen to the opinions of some visitors who were admonished by our robot. We roughly compared the opinions of the admonished and unadmonished visitors to gain insight into how being admonished by the robot affected their opinions and into the validity of the unadmonished visitors’ expectations about actual admonishing situations. Across all three interview topics about the admonishing service, i.e., obedience, comparisons of feelings, and impressions, the dominant opinions of both groups were similar, with slight differences in their reasons. Nevertheless, compared to the admonished visitors, more neutral opinions appeared among the unadmonished visitors due to their lack of familiarity with the robot’s admonishing behavior. The details of our comparison for each topic follow.
Concerning obedience, the majority of unadmonished visitors intended to comply with a robot’s admonishment, consistent with the fact that all the interviewed admonished visitors did obey the robot. Regarding their reasons for obedience, the admonished visitors tended to talk more about their own actions, i.e., admitting their own mistakes. Unadmonished visitors commented equally on their intention of admitting a mistake and on such external influences as the robot’s capability and merits and the presence of others. Furthermore, based on their experiences, the admonished visitors said the robot politely warned them, an outcome that the unadmonished visitors could not imagine due to their lack of exposure to the robot’s admonishing.
In comparing their feelings of receiving a robot’s admonishment with a human’s, unadmonished visitors expressed more positive expectations about a robot-admonishing service. A large majority stated that a robot’s admonishment would be easier to accept than a human’s. However, such a trend is not clear in the results from the admonished visitors. Instead, two dominant opinions emerged: “a robot’s admonishment is easier to accept than a human’s” and “human and robot admonishments feel the same.” It is unclear whether this result reflects the small number (i.e., 6) of admonished visitors in our study or their experiences with a robot’s admonishment. Furthermore, unadmonished visitors expected a human’s admonishment to be more powerful than a robot’s, although none of the admonished visitors gave this opinion. Perhaps the unadmonished visitors imagined a robot’s admonishment for various inappropriate behavior situations, including more serious ones. However, the admonished visitors might have considered their own experiences and felt that such admonishments were powerful enough.
A majority of the unadmonished visitors reported a positive impression of the robot-admonishing service, as did all the interviewed admonished visitors. Considering their reasons, most of the admonished visitors appeared to believe that a robot-admonishing service resembles a human-admonishing service and claimed no particular resistance toward it. The majority of unadmonished visitors commented on the robot’s specific merits.
Based on the above comparison, similar to the majority of unadmonished visitors, the admonished visitors we interviewed had positive impressions of the robot-admonishing service, despite being admonished by a robot. Thus, a robot’s admonishment did not lead to any particular negative impressions from the visitors.

7.2.4 Ethical Implications of Using Security Robots in Public Space.

Our research suggests several ethical implications related to using security guard robots to regulate people’s behavior in public spaces. First, some people might doubt the ability of robots, which still lag far behind humans in moral and cognitive capabilities, to judge human behaviors and issue admonishments. This doubt could be heightened when they must accept an admonishment from a robot, particularly if they are unaware that they are engaging in inappropriate behavior. Therefore, before deploying security robots in public spaces, it is necessary to establish trust in them and to assure people that they operate under human supervision.
Furthermore, being approached by a security robot in a public space and admonished by it could be more unpleasant for some people than being admonished by a human guard. People could be scared for various reasons, such as the unfamiliarity of such machines, the unpredictability of a robot’s intentions, and its physical appearance. A person could also feel ashamed and guilty when a robot that they consider inferior to themselves reveals their mistakes. Similarly, being admonished by a robot in a public space could be embarrassing, since the novelty of such incidents can attract extra attention from passersby. Therefore, it is important to be mindful of people’s feelings when using security robots to regulate them.
Moreover, the presence of a mobile security robot could be scary for small children due to its appearance, movements, and so forth. Therefore, it would be better to give security robots that interact with the general public a pleasant appearance. Also, even if a robot is designed to be completely safe, some parents may be concerned about the safety of children around it. It is thus necessary to improve the perceived safety of such robots.

7.2.5 Challenges during Field Trial and Lessons Learned.

During our field trial at a public event, we encountered several challenges. Below, we discuss them and our strategies for overcoming them.
(1)
Developing the robot system to be robust to various visitor behaviors: One big challenge was developing our robot system to be robust to visitor behaviors. Visitor behaviors in the real world are sometimes complex and unpredictable, yet the robot should still work reasonably well under them. During prototype testing with hired participants, we realized that while our robot worked well for ideal participant behaviors, it was less capable of handling unexpected ones (e.g., some visitors appeared to be walking toward the queue to join it but actually intended to talk to the robot or were just passing through, and some visitors suddenly left the queue). We first tried to develop an algorithm that recognized visitors’ paths from their trajectories, but this approach caused many misrecognitions. As an alternative, we defined several parameters, performed a series of tests, and tuned them to ensure that the robot could handle various visitor behaviors. This testing and tuning process took considerable time and effort.
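As an illustration of such a parameter-based alternative to full trajectory recognition, the sketch below classifies a tracked visitor as having joined the queue, approaching it, or merely passing by, using a dwell-time window plus distance and closing-speed thresholds. All names, the tracking format, and the threshold values are hypothetical; the parameters actually tuned for the field trial are not reported here.

```python
import math
from dataclasses import dataclass


@dataclass
class Track:
    """A visitor's recent (x, y) position samples at 1 Hz, newest last."""
    positions: list


# Hypothetical tuned parameters (illustrative values only).
JOIN_RADIUS_M = 1.5       # how close to the queue tail counts as "joined"
DWELL_SECONDS = 3         # samples a visitor must persist before classification
APPROACH_SPEED_MPS = 0.2  # minimum closing speed toward the tail


def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])


def classify(track, queue_tail):
    """Classify a visitor as 'joined', 'approaching', 'passing_by', or 'unknown'.

    Instead of recognizing full trajectories, simple thresholds with a dwell
    window keep brief, ambiguous movements (pausing to look at the robot,
    cutting across the area) from triggering ushering behavior.
    """
    if len(track.positions) < DWELL_SECONDS:
        return "unknown"  # not enough samples yet
    recent = track.positions[-DWELL_SECONDS:]
    # Stayed near the tail for the whole dwell window -> joined the queue.
    if all(distance(p, queue_tail) < JOIN_RADIUS_M for p in recent):
        return "joined"
    # Average closing speed toward the tail (positive = getting closer).
    closing = (distance(recent[0], queue_tail)
               - distance(recent[-1], queue_tail)) / DWELL_SECONDS
    if closing > APPROACH_SPEED_MPS:
        return "approaching"
    return "passing_by"
```

In this kind of design, the tuning effort described above amounts to adjusting the radius, dwell, and speed constants against observed visitor behavior rather than retraining a trajectory model.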
(2)
Finding a public event for conducting a field trial: Finding a public event at which to conduct a field trial with the robot and getting approval from event management was challenging and time-consuming, so it is advisable to start searching for a field-trial location well in advance. The time required depends on the availability of suitable events. Once a candidate event is found, it is necessary to negotiate with event management for approval. These negotiations cover the field-trial plan as well as the robot’s service (or a demonstration of it). Important details such as the safety procedures during the field trial, our requirements, and the data expected to be captured should all be included in the plan. When negotiating, event management may request changes to the robot and the field-trial plan. It is also important to confirm rights-related matters in advance; especially if the robot is tested at commercial events, certain permissions must be obtained to include brand names in the robot’s utterances, costumes, and so forth.
(3)
Deciding the robot’s conversation length to avoid disturbing the robot’s service: Another challenge was determining an appropriate conversation length with visitors so as not to interfere with the robot’s services. Engaging in lengthy conversations with visitors is not the primary goal of queue-managing robots (or of some other robot services such as patrolling and food delivery), and such long interactions may interfere with the robot’s ability to perform its main duties. Some visitors did, however, wait around the robot and make prolonged attempts to engage with it. As a solution, we limited the length of the robot’s conversations. While this approach reduced interaction length, it also disappointed some visitors. Therefore, future work should study how to determine a conversation length that neither interferes with the robot’s duties nor disappoints visitors. It is also worth studying alternative strategies for handling visitors who attempt long interactions that could interfere with the robot’s duties.
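A minimal way to enforce such a limit is a per-visitor time budget that, once exhausted, triggers a polite closing utterance and a return to queue duties rather than an abrupt cutoff. The sketch below is only an illustration; the class, limit value, and closing phrase are hypothetical, not the ones used in the field trial.

```python
import time

# Hypothetical limit; the actual value used in the trial is not stated here.
MAX_CONVERSATION_SECONDS = 30.0


class ConversationLimiter:
    """Caps per-visitor chat time so Q&A does not crowd out queue management.

    When the budget runs out, the robot closes the conversation politely
    and resumes ushering instead of interrupting the visitor mid-sentence.
    """

    def __init__(self, limit=MAX_CONVERSATION_SECONDS, clock=time.monotonic):
        self.limit = limit
        self.clock = clock          # injectable clock eases testing
        self.started_at = None

    def start(self):
        """Call when a visitor begins interacting with the robot."""
        self.started_at = self.clock()

    def should_wrap_up(self):
        """True once the visitor's conversation budget is exhausted."""
        if self.started_at is None:
            return False
        return self.clock() - self.started_at >= self.limit

    def closing_utterance(self):
        return "Sorry, I must get back to guiding the line. Please enjoy the event!"
```

The dialogue loop would check `should_wrap_up()` between turns and, when it fires, speak the closing utterance before resuming ushering; tuning the single limit value is where the trade-off between service continuity and visitor disappointment surfaces.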

7.3 Open Questions

7.3.1 Ethical Concerns of Regulatory Robots.

One important open question remains: the ethical concerns of applying regulatory robot services in future societies. First, it is unclear whether allowing a robot or a machine to judge human behavior is socially acceptable. In our study, a human operator confirmed the robot’s detection of inappropriate behaviors and gave it permission to make an admonishment. While some might believe that robots have adequate capability for such tasks and can even perform them more fairly than humans, a portion of people remain resistant to allowing robots to judge human behaviors. Considering issues of responsibility, we personally believe such decision-making should be done by humans or under their supervision.
Second, it is unclear who should take the responsibility for any adverse effects on people due to the controlling behaviors of robots. Our findings show that some people were intimidated and embarrassed when admonished by a robot. If a robot mistakenly admonishes a visitor who did not engage in any inappropriate behavior and if any mental anguish/pain were caused, someone must take responsibility and compensate the injured person. The question of who bears responsibility—robot developers, employers, or another party—remains unresolved.

7.3.2 For Which Contexts Are Regulatory Robots Suitable?

Our findings reveal that the services of regulatory robots are suitable for children’s events attended by families with young school-aged children. It remains an open question in which other contexts robots can perform regulatory tasks on behalf of humans. A robot’s suitability for a particular context depends on many factors, including its effectiveness, its social acceptance, and its value in a specific location.
A robot’s effectiveness in a particular context depends on many aspects, including the task’s complexity and the nature of the people present. We believe a robot will be less effective in situations that demand fast responses or intricate communication skills, or where serious inappropriate behaviors are likely, because of its technical immaturity and weaker admonishment skills compared to humans. Furthermore, such visitor characteristics as willingness to cooperate and ability to understand a robot’s requests will influence its effectiveness. In our case, judging from the visitors’ opinions, it is plausible that parents behaved well around their children as role models and therefore complied with the robot’s guidance. If only adults had been present, incidents of ignoring the robot would likely have increased [38]. Similarly, a robot will be less effective in environments without any adults, because children generally show less compliance [8] unless guided by adults.
It is also crucial to consider whether using a robot for regulatory services in a given situation is socially acceptable. Since social acceptance is a complex concept, potential users’ opinions should be investigated in detail before deploying a robot in a specific context, and the robot should be deployed only if it is accepted.
In addition, a robot’s value in a particular context affects its suitability. For instance, robots will have higher value in locations where most visitors are children, who show great interest in them. Such a positive view of robots also makes interacting with children easier, suggesting a promising use for robots in those contexts.

7.3.3 For Which Inappropriate Behaviors Is a Robot’s Admonishment Effective?

Our results showed that a simple admonishment from a robot is effective for less serious or unintentional inappropriate behaviors. However, it remains unclear what other kinds of inappropriate behaviors a robot’s admonishment can effectively reduce. Compliance with a robot’s admonishment depends on the nature of the prohibited behavior. When people are admonished to act in a certain way, they may feel that their freedom to act as they wish is threatened and experience an unpleasant motivational state (i.e., psychological reactance). Such reactance can motivate them to restore their freedom through actions such as refusing to comply or behaving aggressively toward whoever poses the threat [44]. The importance of the threatened freedom [6] is one factor that determines the amount of reactance; in other words, how badly do they want to perform the prohibited action? In our situation, being urged to move ahead and line up properly was not seen as a restriction on an important freedom, so visitors might have felt little resistance to complying with the robot’s admonishment. If the importance of the threatened freedom is high, for instance, when admonishing a visitor who is walking while using a smartphone, he or she will perhaps ignore the robot.
Furthermore, in our field trial, most of the people who engaged in inappropriate behaviors did so unintentionally; a warning from the robot therefore helped them realize their mistake and correct it. On the other hand, people who intentionally engage in inappropriate behavior tend to trivialize the robot [38]. We anticipate that a robot’s admonishment will not be forceful enough to stop those who intentionally engage in serious inappropriate behaviors, such as smuggling prohibited items into a stadium, situations that even human security guards struggle to resolve.

7.4 Future Role of Operator

We used a human operator to compensate for the technical limitations of our robot. We believe that, with future technological advancements, most of the operator’s duties will become fully autonomous or their performance will be significantly improved. The effort required per robot will be greatly reduced, enabling one operator to control multiple robots simultaneously. However, we do not expect operators to be eliminated completely, due to ethical considerations. Currently, the operator performs four types of duties: updating queue area settings, resolving system errors, confirming admonishing targets, and speech recognition.
We expect that the selection of queue area settings will be automated with reasonable accuracy and that speech recognition can be delegated to a robust ASR system. Choosing appropriate queue area settings based on crowd conditions is currently a primary task of the operator. Although we did not emphasize automating this task, because resolving such technical difficulties was outside the main thrust of our study, we believe robots could be given the functionality to update the queue area by detecting the crowd’s condition. Furthermore, speech recognition was carried out entirely by the operator because ASR accuracy is poor in the highly noisy environments of public events; future ASR systems might perform much better in such settings.
Considering the practical limitations, we believe an operator will still be needed to confirm detections of inappropriate behavior and to handle errors. First, admonishing-target detection must be essentially error-free, because a robot that admonishes a person without a legitimate reason might provoke a conflict; yet a detection algorithm with 100% accuracy seems impossible in real environments. Even if such accuracy could be achieved, some people might not accept the idea of allowing AI to judge human morality, so a human operator might still need to make the final judgment [13]. Second, operator assistance may be required to fix system errors, since an error-free system cannot be guaranteed. Even with our current, somewhat crude technology, the operator spent minimal time on error correction. We believe that, as technologies develop, systems will become more robust, further reducing the number of operator interventions.
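The human-in-the-loop confirmation described above can be sketched as a simple gate: the detector proposes an admonishment, and the robot acts only when a confidence threshold is met and the operator approves. The names, fields, and threshold below are hypothetical, not taken from the deployed system:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    visitor_id: str
    behavior: str       # e.g., "cutting_in_line"
    confidence: float   # detector score in [0, 1]


def should_admonish(detection, operator_approves, min_confidence=0.8):
    """Admonish only when the detector is confident AND a human operator
    confirms the target. min_confidence is an illustrative threshold."""
    if detection.confidence < min_confidence:
        return False  # too uncertain; never auto-admonish
    return operator_approves(detection)


# Example: a clear detection still needs operator approval,
# and an uncertain one is dropped before reaching the operator.
approve_all = lambda d: True
reject_all = lambda d: False
clear = Detection("v7", "cutting_in_line", 0.95)
unsure = Detection("v8", "cutting_in_line", 0.55)
assert should_admonish(clear, approve_all) is True
assert should_admonish(clear, reject_all) is False
assert should_admonish(unsure, approve_all) is False
```

The design point this gate captures is that neither condition alone suffices: a confident detector without human review risks the conflicts discussed above, while routing every low-confidence detection to the operator would defeat the goal of reducing operator effort.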

7.5 Limitations

Our study has several limitations. First, we modeled the ushering behavior of only one professional guard. Perhaps other effective strategies could have been incorporated into our study.
Second, the visitors’ interview results were affected by self-selection bias [20], since the visitors themselves chose whether to participate. Such situations are unavoidable in field studies. Self-selection bias may skew interview results toward the positive, risking overly optimistic conclusions. Although in self-selection situations people with strong positive or negative impressions of the robot are the ones expected to participate [20], our interviews yielded relatively few negative opinions. This could be because visitors with negative opinions declined to be interviewed given the context of the amusement event (they may hesitate to discuss negative things in an enjoyable setting) or because few visitors held strong negative impressions. Our interview results could therefore underrepresent concerns about and negative opinions of such robots. Furthermore, the visitors who accepted interviews may have been those interested in new technology and the robot itself; indeed, some of them said so. Such visitors may be more open to ideas like regulatory robot services than people without a favorable impression of robots in general, creating a risk that they overlooked the robot’s demerits and reported only its positive aspects. Finally, we did not learn the opinions of most of the admonished visitors (including those who disobeyed), because they declined our interview requests or the interviewers were occupied with other visitors. Had we gathered those opinions, our interview results might have differed or included more negative views.
Third, our interview results and observations might not accurately represent the general public’s response to the robot, because we tested it only at a children’s event attended by families. The robot may have received more favorable responses from families with children: parents tend to be more polite in front of their children, and children are unlikely to bully or ignore the robot with their parents standing next to them. How other groups would respond to our robot is unknown. Furthermore, our interview results are limited to the staff members of one particular event, so their opinions cannot be generalized to staff serving at different types of events.
Finally, our findings cannot be directly applied to countries with different cultures. People’s reactions to the robot and their opinions are strongly shaped by their cultural backgrounds, and their expectations of a security robot align with the role of a human security guard. We conducted this study in Japan, where human security guards are unarmed, play a friendlier role, and also engage in customer service; people are familiar with such services and cooperate with a guard’s requests. Some people might therefore comply with a robot in the same way they would with a human guard. In many other countries, however, security guards do not play such a friendly role: they carry weapons and are limited to security-related tasks. In such places, applying our unarmed, friendly-looking robot to queue management or crowd handling might be less effective, because people might easily ignore its requests. To deploy a queue-managing robot in other countries, we must therefore adapt its design to the cultural context and people’s expectations.

8 Conclusion

Our study developed an effective and acceptable robot for managing a queue of people in public spaces. We proposed a design that mimicked the role of Japanese security guards, who commonly provide queue-management services. Our design concept created the image of a professional security guard by implementing three features of human guards: their duties, their professional behavior, and their professional appearance. The robot led visitors to the queue’s end, admonished those who engaged in inappropriate behaviors, answered questions, and made announcements. We conducted a 10-day field trial at a children’s amusement event to investigate how people perceived a robot that regulated their daily behaviors. Our robot interacted with 2,486 visitors, and in 52 of 54 incidents, visitors complied with its admonishments. We listened to the opinions of visitors, including those who were ushered or admonished by the robot, as well as the event staff. Our findings suggest that, by exhibiting the image of a professional security guard, our robot can regulate visitors in a queue reasonably well while still receiving their acceptance. Although we were unable to interview many admonished visitors, our limited interviews suggest they had a positive attitude toward the robot despite being admonished by it. Furthermore, unadmonished visitors expressed a high intention to comply with a robot’s admonishment in the future. We thus believe that “creating a professional image” is one successful design approach for robots expected to fill authoritative roles in future societies.

Acknowledgments

We are grateful to the Manager of ATC shopping mall and the manager and staff of the amusement event “Let’s Go Thomas!” for allowing us to conduct a field trial at their premises. Furthermore, we would like to thank Kanako Tomita, Junko Tachibana, Yasue Miyazaki, and Yukiko Takei from ATR for supporting the field trial and data analysis.

References

[1]
[2]
Iina Aaltonen, Anne Arvola, Päivi Heikkilä, and Hanna Lammi. 2017. Hello Pepper, may I tickle you? Children’s and adults’ responses to an entertainment robot at a shopping mall. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 53–54.
[3]
Muneeb I. Ahmad and Reem Refik. 2022. “No Chit Chat!” A Warning from a Physical Versus Virtual Robot Invigilator: Which Matters Most? Frontiers in Robotics and AI 9 (2022). DOI: https://doi.org/10.3389/frobt.2022.908013
[4]
Cyrus Anderson, Xiaoxiao Du, Ram Vasudevan, and Matthew Johnson-Roberson. 2019. Stochastic sampling simulation for pedestrian trajectory prediction. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 4236–4243.
[5]
Leonard Bickman. 1974. The social power of a uniform 1. Journal of Applied Social Psychology 4, 1 (1974), 47–61.
[6]
Sharon S. Brehm and Jack W. Brehm. 2013. Psychological reactance: A theory of freedom and control. Academic Press.
[7]
Susannah Breslin. [n.d.]. Meet the Terrifying New Robot Cop That’s Patrolling Dubai. Retrieved from https://www.forbes.com/sites/susannahbreslin/2017/06/03/robot-cop-dubai/
[8]
Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda. 2015. Escaping from children’s abuse of social robots. In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction, 59–66.
[9]
Brad J. Bushman. 1988. The effects of apparel on compliance: A field experiment with a female authority figure. Personality and Social Psychology Bulletin 14, 3 (1988), 459–467.
[10]
Yingfeng Chen, Feng Wu, Wei Shuai, and Xiaoping Chen. 2017. Robots serve humans in public places—KeJia robot as a shopping assistant. International Journal of Advanced Robotic Systems 14, 3 (2017), 1729881417703569.
[11]
Mengdi Chu, Keyu Zong, Xin Shu, Jiangtao Gong, Zhicong Lu, Kaimin Guo, Xinyi Dai, and Guyue Zhou. 2023. Work with AI and work for AI: Autonomous vehicle safety drivers’ lived experiences. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16.
[12]
Derek Cormier, Gem Newman, Masayuki Nakane, James E Young, and Stephane Durocher. 2013. Would you do as a robot commands? An obedience study for human-robot interaction. In Proceedings of the International Conference on Human-Agent Interaction, 1–3.
[13]
Sachi Edirisinghe, Satoru Satake, and Takayuki Kanda. 2023. Field trial of a shopworker robot with friendly guidance and appropriate admonishments. ACM Transactions on Human-Robot Interaction 12, 3 (2023), 1–37.
[14]
Jodi Forlizzi, Thidanun Saensuksopa, Natalie Salaets, Mike Shomin, Tekin Mericli, and Guy Hoffman. 2016. Let’s be honest: A controlled field study of ethical behavior in the presence of a robot. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 769–774.
[15]
Yuxiang Gao and Chien-Ming Huang. 2022. Evaluation of socially-aware robot navigation. Frontiers in Robotics and AI 8 (2022), 721317.
[16]
Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, and Norihiro Hagita. 2011. Teleoperation of multiple social robots. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 42, 3 (2011), 530–544.
[17]
Ulla H. Graneheim and Berit Lundman. 2004. Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today 24, 2 (2004), 105–112.
[18]
Giorgio Grisetti, Cyrill Stachniss, and Wolfram Burgard. 2007. Improved techniques for grid mapping with rao-blackwellized particle filters. IEEE Transactions on Robotics 23, 1 (2007), 34–46.
[19]
Guy Hoffman, Jodi Forlizzi, Shahar Ayal, Aaron Steinfeld, John Antanitis, Guy Hochman, Eric Hochendoner, and Justin Finkenaur. 2015. Robot presence and human honesty: Experimental evidence. In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction, 181–188.
[20]
Nan Hu, Paul A Pavlou, and Jie Zhang. 2017. On self-selection biases in online product reviews. MIS Quarterly 41, 2 (2017), 449–475.
[21]
Takamasa Iio, Satoru Satake, Takayuki Kanda, Kotaro Hayashi, Florent Ferreri, and Norihiro Hagita. 2020. Human-like guide robot that proactively explains exhibits. International Journal of Social Robotics 12 (2020), 549–566.
[22]
Himavath Jois and Alan R. Wagner. 2021. What happens when robots punish? Evaluating human task performance during robot-initiated punishment. ACM Transactions on Human-Robot Interaction (THRI) 10, 4 (2021), 1–18.
[23]
Ela Liberman-Pincu, Amit David, Vardit Sarne-Fleischmann, Yael Edan, and Tal Oron-Gilad. 2021. Comply with me: Using design manipulations to affect human–robot interaction in a COVID-19 officer robot use case. Multimodal Technologies and Interaction 5, 11 (2021), 71.
[24]
Lucia Liu, Daniel Dugas, Gianluca Cesari, Roland Siegwart, and Renaud Dubé. 2020. Robot navigation in crowded environments using deep reinforcement learning. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 5671–5677.
[25]
Christoforos Mavrogiannis, Alena M. Hutchinson, John Macdonald, Patrícia Alves-Oliveira, and Ross A. Knepper. 2019. Effects of distinct robot navigation strategies on human behavior in a crowded environment. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 421–430.
[26]
Christoforos I. Mavrogiannis, Wil B. Thomason, and Ross A. Knepper. 2018. Social momentum: A framework for legible navigation in dynamic multi-agent environments. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 361–369.
[27]
Kazuki Mizumaru, Satoru Satake, Takayuki Kanda, and Tetsuo Ono. 2019. Stop doing it! Approaching strategy for a robot to admonish pedestrians. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 449–457.
[28]
Omar Mubin, Massimiliano Cappuccio, Fady Alnajjar, Muneeb Imtiaz Ahmad, and Suleman Shahid. 2020. Can a robot invigilator prevent cheating? AI & Society 35 (2020), 981–989.
[29]
Junya Nakanishi, Itaru Kuramoto, Jun Baba, Kohei Ogawa, Yuichiro Yoshikawa, and Hiroshi Ishiguro. 2020. Continuous hospitality with social robots at a hotel. SN Applied Sciences 2 (2020), 1–13.
[30]
Eyewitness News. 2021. NYPD’s ‘Digidog’ returned, ‘put down’ after viral outrage. Retrieved June 6, 2021 from https://abc7ny.com/digidog-robodog-robot-dog-nypd/10559908/
[31]
Marketta Niemelä, Päivi Heikkilä, and Hanna Lammi. 2017. A social service robot in a shopping mall: Expectations of the management, retailers and consumers. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 227–228.
[32]
Billy Okal and Kai O. Arras. 2016. Learning socially normative robot navigation behaviors with bayesian inverse reinforcement learning. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2889–2895.
[33]
Samson O. Oruma, Mary Sánchez-Gordón, Ricardo Colomo-Palacios, Vasileios Gkioulos, and Joakim K Hansen. 2022. A systematic review on social robots in public spaces: Threat landscape and attack surface. Computers 11, 12 (2022), 181.
[34]
Sofia Petisca, Iolanda Leite, Ana Paiva, and Francisco Esteves. 2022. Human dishonesty in the presence of a robot: The effects of situation awareness. International Journal of Social Robotics 14, 5 (2022), 1211–1222.
[35]
Laurel D. Riek. 2012. Wizard of oz studies in hri: A systematic review and new reporting guidelines. Journal of Human-Robot Interaction 1, 1 (2012), 119–136.
[36]
Satoru Satake, Kotaro Hayashi, Keita Nakatani, and Takayuki Kanda. 2015. Field trial of an information-providing robot in a shopping mall. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 1832–1839.
[37]
Satoru Satake, Takayuki Kanda, Dylan F. Glas, Michita Imai, Hiroshi Ishiguro, and Norihiro Hagita. 2009. How to approach humans? Strategies for social robots to initiate interaction. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, 109–116.
[38]
Sebastian Schneider, Yuyi Liu, Kanako Tomita, and Takayuki Kanda. 2022. Stop ignoring me! On fighting the trivialization of social robots in public spaces. ACM Transactions on Human-Robot Interaction (THRI) 11, 2 (2022), 1–23.
[39]
Kyung Hwa Seo and Jee Hye Lee. 2021. The emergence of service robots at restaurants: Integrating trust, perceived risk, and satisfaction. Sustainability 13, 8 (2021), 4431.
[40]
Chao Shi, Satoru Satake, Takayuki Kanda, and Hiroshi Ishiguro. 2016. How would store managers employ social robots?. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 519–520.
[41]
Masahiro Shiomi, Takayuki Kanda, Hiroshi Ishiguro, and Norihiro Hagita. 2006. Interactive humanoid robots for a science museum. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 305–312.
[42]
Masahiro Shiomi, Daisuke Sakamoto, Takayuki Kanda, Carlos Toshinori Ishi, Hiroshi Ishiguro, and Norihiro Hagita. 2011. Field trial of a networked robot at a train station. International Journal of Social Robotics 3 (2011), 27–40.
[43]
Masahiro Shiomi, Francesco Zanlungo, Kotaro Hayashi, and Takayuki Kanda. 2014. Towards a socially acceptable collision avoidance for a mobile robot navigating among pedestrians using a pedestrian model. International Journal of Social Robotics 6, 3 (2014), 443–455.
[44]
Christina Steindl, Eva Jonas, Sandra Sittenthaler, Eva Traut-Mattausch, and Jeff Greenberg. 2015. Understanding psychological reactance. Zeitschrift für Psychologie (2015). DOI:
[45]
Gabriele Trovato, Alexander Lopez, Renato Paredes, Diego Quiroz, and Francisco Cuellar. 2019. Design and development of a security and guidance robot for employment in a mall. International Journal of Humanoid Robotics 16, 05 (2019), 1950027.
[46]
Araceli Vega, Luis J. Manso, Ramón Cintas, and Pedro Núñez. 2019a. Planning human-robot interaction for social navigation in crowded environments. In Advances in Physical Agents: Proceedings of the 19th International Workshop of Physical Agents (WAF 2018) Springer, 195–208.
[47]
Araceli Vega, Luis J. Manso, Douglas G. Macharet, P.ablo Bustos, and Pedro Núñez. 2019b. Socially aware robot navigation system in human-populated and interactive environments based on an adaptive spatial density function and space affordances. Pattern Recognition Letters 118 (2019), 72–84.
[48]
Araceli Vega-Magro, Luis Vicente Calderita, Pablo Bustos, and P. Núñez. 2020. Human-aware robot navigation based on time-dependent social interaction spaces: A use case for assistive robotics. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 140–145.
[49]
Sachie Yamada, Takayuki Kanda, and Kanako Tomita. 2020. An escalating model of children’s robot abuse. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 191–199.

Published In

ACM Transactions on Human-Robot Interaction  Volume 13, Issue 4
December 2024
492 pages
EISSN:2573-9522
DOI:10.1145/3613735
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 October 2024
Online AM: 25 July 2024
Accepted: 06 June 2024
Revised: 02 April 2024
Received: 03 August 2023
Published in THRI Volume 13, Issue 4


Author Tags

  1. Queue-managing robot
  2. admonishment
  3. field trial
  4. security guard robot
  5. robot services in public space


Funding Sources

  • JST CREST
  • JST Moonshot R & D
