Intelligent Human–Robot Interaction: 3rd Edition

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Locomotion and Bioinspired Robotics".

Deadline for manuscript submissions: 31 March 2025 | Viewed by 3565

Special Issue Editors


Prof. Dr. Jun Huang
Guest Editor
School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
Interests: intelligent remanufacturing technology; robotics and automation; human–machine collaboration; optical fiber sensing and intelligent sensing technology; mechanical equipment condition monitoring and fault diagnosis

Dr. Ruiya Li
Guest Editor
School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
Interests: fiber optic sensing; robot force/position hybrid control; special robots

Special Issue Information

Dear Colleagues,

Human–robot interaction (HRI) is a multi-disciplinary field that encompasses artificial intelligence, robotics, human–computer interaction, machine vision, natural language understanding, and social science. With the rapid development of AI and robotics, intelligent HRI has become an increasingly active research topic in robotics.

Intelligent HRI involves many challenges in science and technology, particularly in human-centered aspects. These include human expectations of, attitudes towards, and perceptions of robots; the safety, acceptability, and comfort of robotic behaviors; and the closeness of robots to humans. Conversely, robots are expected to understand the attention, intention, and even emotion of humans and, with the support of AI, to respond promptly and appropriately. Achieving excellent intelligent HRI therefore requires research and development across this multi- and cross-disciplinary field, with efforts in all relevant aspects, including actuation, sensing, perception, control, recognition, planning, learning, AI algorithms, intelligent I/O, and integrated systems.

The aim of this Special Issue is to present new concepts, ideas, findings, and the latest achievements in both theoretical research and technical development in intelligent HRI. We invite scientists and engineers from robotics, AI, computer science, and other relevant disciplines to report the latest results of their research and development in the field of intelligent HRI. The topics of interest include, but are not limited to, the following:

  • Intelligent sensors and systems;
  • Bio-inspired sensing and learning;
  • Multi-modal perception and recognition;
  • Social robotics;
  • Autonomous behaviors of robots;
  • AI algorithms in robotics;
  • Collaboration between humans and robots;
  • Advances and future challenges of HRI.

Prof. Dr. Jun Huang
Dr. Ruiya Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent sensors and systems
  • bio-inspired sensing and learning
  • multi-modal perception and recognition
  • social robotics
  • autonomous behaviors of robots
  • AI algorithms in robotics
  • collaboration between humans and robots
  • advances and future challenges of HRI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

17 pages, 2630 KiB  
Article
Multimodal Deep Learning Model for Cylindrical Grasp Prediction Using Surface Electromyography and Contextual Data During Reaching
by Raquel Lázaro, Margarita Vergara, Antonio Morales and Ramón A. Mollineda
Biomimetics 2025, 10(3), 145; https://doi.org/10.3390/biomimetics10030145 - 27 Feb 2025
Viewed by 164
Abstract
Grasping objects, from simple tasks to complex fine motor skills, is a key component of our daily activities. Our approach to facilitate the development of advanced prosthetics, robotic hands and human–machine interaction systems consists of collecting and combining surface electromyography (EMG) signals and contextual data of individuals performing manipulation tasks. In this context, the identification of patterns and prediction of hand grasp types is crucial, with cylindrical grasp being one of the most common and functional. Traditional approaches to grasp prediction often rely on unimodal data sources, limiting their ability to capture the complexity of real-world scenarios. In this work, grasp prediction models that integrate both EMG signals and contextual (task- and product-related) information have been explored to improve the prediction of cylindrical grasps during reaching movements. Three model architectures are presented: an EMG processing model based on convolutions that analyzes forearm surface EMG data, a fully connected model for processing contextual information, and a hybrid architecture combining both inputs resulting in a multimodal model. The results show that context has great predictive power. Variables such as object size and weight (product-related) were found to have a greater impact on model performance than task height (task-related). Combining EMG and product context yielded better results than using each data mode separately, confirming the importance of product context in improving EMG-based models of grasping.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
Figures

Graphical abstract
Figure 1. Seven zones for surface EMG placement from [24].
Figure 2. CDF with the proposed cut-off threshold (700 samples), along with the percentage of rejected samples (7.77%).
Figure 3. Data distributions by class and contextual data: (a) Weight. (b) Task height.
Figure 4. Comparison of SPAN distributions by class: (a) Span 1, main span of the product. (b) Span 2, secondary span of the product.
Figure 5. (a) CNN for EMG signals; (b) FC neural network for contextual data.
Figure 6. Hybrid model architecture (M_HYBRID).
Figure 7. Training results of the models: (a) EMG, (b) contextual, (c) hybrid.
Figure 8. Confusion matrices of the models: (a) EMG, (b) contextual, (c) hybrid.
Figure 9. Comparison of accuracy and loss for hybrid models. The dashed lines indicate the best values achieved.
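To make the EMG-plus-context fusion summarised in the abstract (and in Figures 5 and 6) concrete, here is a minimal PyTorch sketch of a hybrid model: a convolutional branch over the surface-EMG window combined with a fully connected branch over contextual features. The seven EMG channels and the 700-sample window are taken loosely from the figure captions; all other layer sizes and the class name HybridGraspModel are illustrative assumptions, not the authors' published configuration.

import torch
import torch.nn as nn

class HybridGraspModel(nn.Module):
    """Toy hybrid model: CNN over EMG windows + fully connected net over contextual data."""
    def __init__(self, n_emg_channels=7, n_context_feats=4, n_classes=2):
        super().__init__()
        # Convolutional branch over the surface-EMG window (channels x time).
        self.emg_branch = nn.Sequential(
            nn.Conv1d(n_emg_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Fully connected branch over contextual (task- and product-related) features.
        self.context_branch = nn.Sequential(
            nn.Linear(n_context_feats, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings and classify the grasp type.
        self.head = nn.Sequential(nn.Linear(64 + 16, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, emg, context):
        # emg: (batch, channels, time); context: (batch, n_context_feats)
        fused = torch.cat([self.emg_branch(emg), self.context_branch(context)], dim=1)
        return self.head(fused)

model = HybridGraspModel()
logits = model(torch.randn(8, 7, 700), torch.randn(8, 4))  # dummy batch of 700-sample windows
print(logits.shape)  # torch.Size([8, 2])

The unimodal baselines mentioned in the abstract correspond to using either branch alone with its own classification head; the concatenation step is what makes the model multimodal.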
25 pages, 1681 KiB  
Article
Multi-Modal Social Robot Behavioural Alignment and Learning Outcomes in Mediated Child–Robot Interactions
by Paul Baxter
Biomimetics 2025, 10(1), 50; https://doi.org/10.3390/biomimetics10010050 - 14 Jan 2025
Viewed by 676
Abstract
With the increasing application of robots in human-centred environments, there is growing motivation for incorporating some degree of human-like social competence. Fields such as psychology and cognitive science not only provide guidance on the types of behaviour that could and should be exhibited by robots; they may also indicate the manner in which these behaviours can be achieved. The domain of social child–robot interaction (sCRI) provides a number of challenges and opportunities in this regard; the application to an educational context allows child learning outcomes to be characterised as a result of robot social behaviours. One such social behaviour that is readily (and unconsciously) used by humans is behavioural alignment, in which the behaviours expressed by one person adapt to those of their interaction partner, and vice versa. In this paper, the role that robot non-verbal behavioural alignment with its interaction partner can play in facilitating learning outcomes for the child is examined. This behavioural alignment is facilitated by a human memory-inspired learning algorithm that adapts in real time over the course of an interaction. A large touchscreen is employed as a mediating device between a child and a robot. Collaborative sCRI is emphasised, with the touchscreen providing a common set of interaction affordances for both child and robot. The results show that an adaptive robot is capable of engaging in behavioural alignment, and indicate that this leads to greater learning gains for the children. This study demonstrates the specific contribution that behavioural alignment makes in improving learning outcomes for children when employed by social robot interaction partners in educational contexts.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
Show Figures

Figure 1

Figure 1
<p>A humanoid robot (Aldebaran Nao is shown) interacting with a virtual object displayed on the touchscreen: in the example interaction shown, an image is to be moved from an initial position (I) to a goal position (G), along a defined path (blue line). The robot does not make direct contact with the screen, but instead coordinates its movement with that of the virtual object shown on screen.</p>
Full article ">Figure 2
<p>Constructing Bezier curve paths for virtual objects: the control points <span class="html-italic">a</span> and <span class="html-italic">b</span> are defined on the circumference of a unit radius circle around the initial (<span class="html-italic">I</span>) and goal (<span class="html-italic">G</span>) points of the movement, respectively. (<b>a</b>) Standard curve; (<b>b</b>) Bezier curve giving the impression of the robot ‘changing its mind’ on goal location mid-move by setting the first Bezier control point in the opposite direction to the goal location; (<b>c</b>) converting Bezier curve to robot control behaviour using three intermediate Bezier curve parameter values; (<b>d</b>) intermediate points are closer together on sharp curves facilitating robot control.</p>
Full article ">Figure 3
<p>Connection structure of DAIM network used in the present study. (<b>a</b>) Three touchscreen-oriented modalities (delay, accuracy, and speed) are used, each constituting a modality. In addition to this, a user model modality serves to bind multi-modal information. Network structure is developed and adapted through the interaction (shaded regions indicate relationships between modalities in which associative links, e.g., <math display="inline"><semantics> <msub> <mi>L</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> </semantics></math>, can form: note that all modalities can form associative links with all others). (<b>b</b>) Robot move parameters (in the adaptive condition) are obtained by probing the user model and reading out the units with the highest activation level in each of the modalities.</p>
Full article ">Figure 4
<p>Typical setup of the experiment room: the experimenter and wizard remained out of direct line of sight during an interaction. The pre- and post-tests were completed in the same room as the interactions took place. Not to scale.</p>
Full article ">Figure 5
<p>Two sample interactions showing how the children and robot collaboratively interacted around the touchscreen.</p>
Full article ">Figure 6
<p>Example set of food images shown on the touchscreen during the sorting task with the two categories used (low and high carbohydrate content). Visual and auditory feedback is given upon classification events: the green tick shown denotes a recent correct classification.</p>
Full article ">Figure 7
<p>Mean difference between first and third thirds of the interaction, for each modality: (<b>a</b>) delay between touchscreen moves, (<b>b</b>) move success (classification), (<b>c</b>) touchscreen-oriented move speed. A convergence is seen both for delay and success rate, though there is a moderate divergence for move speed. Error bars show 95% CI.</p>
Full article ">Figure 8
<p>Summary of the Alignment Factors found for each individual child, split by condition: Adaptive and Benchmark. There is an overall alignment effect apparent (orange point/line), which is marginally higher in the Adaptive condition. Error bars are 95% CI.</p>
Full article ">Figure 9
<p>Mean learning gains of the Adaptive and Benchmark conditions. Orange point and dashed line: overall mean across both conditions. Error bars show 95% CI.</p>
Full article ">Figure 10
<p>The relationship between Alignment Factor and mean normalised Learning Gain for the two conditions. Positive Alignment Factors indicate that the alignment of behaviours took place (over the three touchscreen-oriented behaviours examined); positive normalised learning gains indicate that learning took place. The orange line shows a linear trend line when both conditions are taken together, suggesting an overall positive correlation between the Alignment Factor and Learning Gain.</p>
Full article ">
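The Bezier path construction described in the Figure 2 caption, with control points placed on unit-radius circles around the initial and goal points and the curve sampled at intermediate parameter values to produce waypoints for the robot, reduces to a few lines of NumPy. This is only a sketch under stated assumptions: the control-point angles and the number of waypoints below are illustrative, not values from the paper.

import numpy as np

def bezier_path(start, goal, angle_a, angle_b, n_samples=20):
    """Cubic Bezier from start (I) to goal (G); control points a and b lie on
    unit-radius circles around the respective endpoints (cf. Figure 2)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    a = start + np.array([np.cos(angle_a), np.sin(angle_a)])
    b = goal + np.array([np.cos(angle_b), np.sin(angle_b)])
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Standard cubic Bezier: B(t) = (1-t)^3 I + 3(1-t)^2 t a + 3(1-t) t^2 b + t^3 G
    return ((1 - t) ** 3 * start + 3 * (1 - t) ** 2 * t * a
            + 3 * (1 - t) * t ** 2 * b + t ** 3 * goal)

# Placing the first control point roughly opposite the goal gives the
# "changing its mind" impression described for panel (b) of Figure 2.
waypoints = bezier_path(start=(0.0, 0.0), goal=(4.0, 0.0), angle_a=np.pi, angle_b=np.pi / 2)
print(waypoints[:3])  # first few positions handed to the robot/touchscreen controller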
24 pages, 6055 KiB  
Article
Analyzing the Impact of Responding to Joint Attention on the User Perception of the Robot in Human-Robot Interaction
by Jesús García-Martínez, Juan José Gamboa-Montero, José Carlos Castillo and Álvaro Castro-González
Biomimetics 2024, 9(12), 769; https://doi.org/10.3390/biomimetics9120769 - 18 Dec 2024
Viewed by 979
Abstract
The concept of joint attention holds significant importance in human interaction and is pivotal in establishing rapport, understanding, and effective communication. Within social robotics, enhancing user perception of the robot and promoting a sense of natural interaction with robots becomes a central element. In this sense, emulating human-centric qualities in social robots, such as joint attention, defined as the ability of two or more individuals to focus on a common event simultaneously, can increase their acceptability. This study analyses the impact on user perception of a responsive joint attention system integrated into a social robot within an interactive scenario. The experimental setup involves playing against the robot in the “Odds and Evens” game under two conditions: with the joint attention system either active or inactive. Additionally, auditory and visual distractors are employed to simulate real-world distractions, aiming to test the system’s ability to capture and follow user attention effectively. To assess the influence of the joint attention system, participants completed the Robotic Social Attributes Scale (RoSAS) after each interaction. The results showed a significant improvement in user perception of the robot’s competence and warmth when the joint attention system was active.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
Figures

Figure 1. Overview of the RJAS. The system considers detector stimuli input and applies weights to compute the FoA, which the motion controller uses to reorient the robot and perform verbal expressions.
Figure 2. Different frames of one of the users captured by the detectors through the robot’s camera during the experiment. (a) Face Detector: the user’s face is highlighted in a green bounding box. (b) Head Pose Detector: the user’s gaze direction is represented with a blue arrow. (c) Body Pose Detector: the user’s body landmarks are highlighted with green arrows and red dots. (d) Hand Detector: the hand’s finger landmarks and the current finger status are displayed in the image.
Figure 3. BBM controller phases imitating the vestibular system mechanism. (a) The robot starts by moving the eyes toward the FoA if it is within its field of view. (b) It turns its head toward the direction of the FoA while doing a counter-movement with the eyes. (c) Finally, the robot reorients the body towards the FoA.
Figure 4. The social robot Mini.
Figure 5. Workflow of the gaming experience. The blue boxes represent the common game stages in the scenario. Distinctive icons represent the different stimuli (audio and visual).
Figure 6. Study setup.
Figure 7. Bar charts with the average scores and SDs for each of the three dimensions of the user’s perception of the robot measured by the RoSAS questionnaire. Significance levels are indicated: * for p < 0.05.
Figure 8. Average scores and SDs for male participants’ perception of the robot. Significance levels are indicated: * for p < 0.05 and ** for p < 0.01.
Figure 9. Average scores and SDs for female participants’ perception of the robot.
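One plausible reading of the fusion step summarised in the Figure 1 caption, where detector stimuli are weighted and combined into a focus of attention (FoA) that the motion controller then reorients towards, is a confidence-weighted average of candidate targets. The detector names, weights, and fusion rule below are illustrative assumptions, not the published RJAS implementation.

from dataclasses import dataclass

@dataclass
class Stimulus:
    source: str        # e.g. "face", "head_pose", "body_pose", "hand", "audio"
    position: tuple    # candidate attention target in the robot frame (x, y, z), metres
    confidence: float  # detector confidence in [0, 1]

# Per-detector weights are assumptions for illustration only.
DETECTOR_WEIGHTS = {"face": 1.0, "head_pose": 0.8, "body_pose": 0.5, "hand": 0.6, "audio": 0.4}

def compute_foa(stimuli):
    """Confidence- and detector-weighted average of candidate attention targets."""
    total, acc = 0.0, [0.0, 0.0, 0.0]
    for s in stimuli:
        w = DETECTOR_WEIGHTS.get(s.source, 0.0) * s.confidence
        total += w
        acc = [a + w * p for a, p in zip(acc, s.position)]
    if total == 0.0:
        return None    # nothing salient: keep the current orientation
    return tuple(a / total for a in acc)

foa = compute_foa([Stimulus("face", (1.2, 0.3, 1.5), 0.9),
                   Stimulus("audio", (0.5, -1.0, 1.0), 0.6)])
print(foa)  # target the eye/head/body reorientation phases of Figure 3 would move towards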
29 pages, 5444 KiB  
Article
Task Allocation and Sequence Planning for Human–Robot Collaborative Disassembly of End-of-Life Products Using the Bees Algorithm
by Jun Huang, Sheng Yin, Muyao Tan, Quan Liu, Ruiya Li and Duc Pham
Biomimetics 2024, 9(11), 688; https://doi.org/10.3390/biomimetics9110688 - 11 Nov 2024
Viewed by 1229
Abstract
Remanufacturing, which benefits the environment and saves resources, is attracting increasing attention. Disassembly is arguably the most critical step in the remanufacturing of end-of-life (EoL) products. Human–robot collaborative disassembly, as a flexible semi-automated approach, can increase productivity and relieve people of tedious, laborious, and sometimes hazardous jobs. Task allocation in human–robot collaborative disassembly involves methodically assigning disassembly tasks to human operators or robots. However, the schemes for task allocation in recent studies have not been sufficiently refined, and the issue of component placement after disassembly has not been fully addressed. This paper presents a method of task allocation and sequence planning for human–robot collaborative disassembly of EoL products. The adopted criteria for human–robot disassembly task allocation are introduced. The disassembly of each component includes dismantling and placing. The performance of a disassembly plan is evaluated according to its time, cost, and utility value. A discrete Bees Algorithm using genetic operators is employed to optimise the generated human–robot collaborative disassembly solutions. The proposed task allocation and sequence planning method is validated in two case studies involving an electric motor and a power battery from an EoL vehicle. The results demonstrate the feasibility of the proposed method for planning and optimising human–robot collaborative disassembly solutions.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
Figures

Figure 1. The workflow of the proposed method.
Figure 2. The workflow of IDBA.
Figure 3. Swap operator in disassembly solution.
Figure 4. Insert operator in disassembly solution.
Figure 5. Genetic mutation in disassembly solution.
Figure 6. Structure and combined cost constituting individual bees.
Figure 7. The setting of the forbidden direction.
Figure 8. The Gantt chart of HRCD.
Figure 9. Photograph and exploded view of an electric motor. (a) Photograph. (b) Exploded view.
Figure 10. Iterative diagram (electric motor, balance mode). (a) Minimum combined cost. (b) Iterative scatter plot.
Figure 11. The Gantt chart of the optimal disassembly solution of the electric motor.
Figure 12. Photograph and exploded view of the power battery. (a) Photograph. (b) Exploded view.
Figure 13. Iterative diagram (power battery, balance mode). (a) Minimum combined cost. (b) Iterative scatter plot.
Figure 14. Gantt chart of the optimised disassembly solution of the power battery.
Figure 15. IDBA performance for different population sizes and iterations. (a) Average running time (electric motor). (b) Minimum combined cost (electric motor). (c) Average running time (power battery). (d) Minimum combined cost (power battery).
Figure 16. Performance comparisons of different optimisation algorithms (power battery case study). (a) Average running time for different population sizes. (b) Minimum combined cost for different population sizes. (c) Average running time for different numbers of iterations. (d) Minimum combined cost for different numbers of iterations.
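The solution-perturbation operators referenced in Figures 3–5 (swap, insert, and genetic mutation applied to a disassembly solution) can be illustrated with a short Python sketch over a simple encoding: a task-index sequence plus a human/robot assignment per task. This encoding and the operator details are assumptions for illustration only; the paper's actual representation and the full discrete Bees Algorithm loop are not reproduced here.

import random

def swap(sequence):
    """Exchange two randomly chosen tasks in the disassembly sequence (cf. Figure 3)."""
    s = sequence[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insert(sequence):
    """Remove one task and reinsert it at another position (cf. Figure 4)."""
    s = sequence[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def mutate_assignment(assignment, rate=0.2):
    """Flip the human ('H') / robot ('R') allocation of a few tasks (cf. Figure 5)."""
    flip = {"H": "R", "R": "H"}
    return [flip[a] if random.random() < rate else a for a in assignment]

sequence = list(range(8))                             # task indices of an EoL product
assignment = ["H", "R", "R", "H", "R", "H", "R", "R"] # who performs each task
print(swap(sequence), insert(sequence), mutate_assignment(assignment))

In the actual algorithm, any move produced by such operators would typically also have to satisfy disassembly precedence and feasibility constraints before being evaluated against the combined time, cost, and utility criterion.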