Search Results (370)

Search Parameters:
Keywords = touch interaction

26 pages, 1207 KiB  
Article
Behavior Coding of Adolescent and Therapy Dog Interactions During a Social Stress Task
by Seana Dowling-Guyer, Katie Dabney, Elizabeth A. R. Robertson and Megan K. Mueller
Vet. Sci. 2024, 11(12), 644; https://doi.org/10.3390/vetsci11120644 - 12 Dec 2024
Viewed by 246
Abstract
Youth mental health interventions incorporating trained therapy animals are increasingly popular, but more research is needed to understand the specific interactive behaviors between participants and therapy dogs. Understanding the role of these interactive behaviors is important for supporting both intervention efficacy and animal welfare and well-being. The goal of this study was to develop ethograms to assess interactive behaviors (including both affiliative and stress-related behaviors) of participants and therapy dogs during a social stress task, explore the relationship between human and dog behaviors, and assess how these behaviors may vary between experimental conditions with varying levels of physical contact with the therapy dog. Using video data from a previous experimental study (n = 50 human–therapy dog interactions, n = 25 control group), we successfully developed behavioral ethograms that could be used with a high degree of interrater reliability. Results indicated differences between experimental conditions in dog and human behaviors based on whether participants were interacting with a live or a stuffed dog, and whether they were allowed to touch the dog. These findings suggest that physically interacting with a live dog may be an important feature of these interventions, with participants demonstrating increased positive behaviors such as laughing and smiling in these conditions. Dog behaviors also varied based on whether they were in the touching/petting condition of the study, which could indicate reactions to the session and has potential welfare implications for the dogs. Future research should focus on identifying specific patterns of interactive behaviors between dogs and humans that predict anxiolytic outcomes.
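The coding workflow summarized above stands or falls on interrater reliability. As an illustration of how agreement between two behavior coders is typically quantified, here is a minimal Cohen's kappa sketch in Python; the behavior labels and codings are invented for illustration, not taken from the study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning one label per observation."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if both raters labeled independently at their own base rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten observation intervals by two raters.
a = ["pet", "pet", "look", "talk", "pet", "look", "talk", "pet", "look", "pet"]
b = ["pet", "look", "look", "talk", "pet", "look", "talk", "pet", "pet", "pet"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```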
Figures

Figure 1. Original experimental conditions and study timeline/tasks.
Figure 2. Human behaviors by experimental condition (N = 75). Note: * indicates a difference of p < .05 between conditions based on mean rank testing, indicated by brackets.
Figure 3. Dog behaviors by experimental condition (n = 50). Note: * indicates a difference of p < .05 between conditions based on mean rank testing.
33 pages, 20672 KiB  
Review
Beyond Human Touch: Integrating Soft Robotics with Environmental Interaction for Advanced Applications
by Narges Ghobadi, Nariman Sepehri, Witold Kinsner and Tony Szturm
Actuators 2024, 13(12), 507; https://doi.org/10.3390/act13120507 - 8 Dec 2024
Viewed by 873
Abstract
Soft robotics is an emerging field dedicated to the design and development of robots with soft structures. Soft robots offer unique capabilities in terms of flexibility, adaptability, and safety of physical interaction, and therefore enable advanced collaboration between humans and robots. The further incorporation of soft actuators, advanced sensing technologies, user-friendly control interfaces, and safety considerations enhances the interaction experience. Applications in healthcare, specifically in rehabilitation and assistive devices, as well as manufacturing, show how soft robotics has revolutionized human–robot collaboration and improved quality of life. Soft robotics can create new opportunities to enhance human well-being and increase efficiency in human–robot interactions. Nevertheless, challenges persist, and future work must focus on overcoming technological barriers while increasing reliability, refining control methodologies, and enhancing user experience and acceptance. This paper reviews soft robotics and outlines its advantages in scenarios involving human–robot interaction.
(This article belongs to the Section Actuators for Robotics)
Figures

Figure 1. Publication counts per year for soft robotics and human–soft robot interaction. Graph generated using data from [13].
Figure 2. Soft fluid-driven actuators: (a) earthworm-inspired soft pneumatic actuator (reproduced from Yang et al., IEEE Access, 2020, CC BY 4.0) [32]; (b) hydraulic artificial muscles (reproduced from Feng et al., Robomech Journal, 2021, CC BY 4.0) [33]; (c) continuous-flow fluidic actuator with integrated magnetorheological fluid valves (reproduced from McDonald et al., Adv. Intell. Syst., 2020, CC BY 4.0) [31].
Figure 3. Soft voltage-driven actuators: (a) soft modular continuum arm using tendon-driven actuation (reproduced from Mishra et al., Front. Robot. AI, 2017, CC BY 4.0) [11]; (b) soft tunable lenses based on zipping electroactive polymers (reproduced from Hartmann et al., Adv. Sci., 2021, CC BY 4.0) [43]; (c) soft biomimetic robotic fish based on dielectric elastomer actuators (reproduced from Shintake et al., Soft Robotics, 2018, CC BY 4.0) [65]; (d) flexible shape-memory alloy actuator (reproduced with permission from Villoslada et al., Robotics and Autonomous Systems, 2015. © 2015 Elsevier) [51]; (e) soft crawling robot with electrothermal actuation (reproduced from Wu et al., Sci. Adv., 2023, CC BY 4.0) [50].
Figure 4. Injection molding used to fabricate a soft lens [43].
Figure 5. Haptic feedback with soft robots: (a) multi-fingered palpation using pneumatic haptic feedback actuators for tumor detection (reproduced with permission from Li et al., Sensors and Actuators A: Physical, 2014. © 2014 Elsevier) [165]; (b) soft haptic variable stiffness interface for rehabilitation (reproduced from Sebastian et al., Front. Robot. AI, 2017, CC BY 4.0) [166]; (c) soft pneumatic actuator (SPA)–skin interface to recreate the roughness, shape, and size of a perceived object (reproduced from Sonar et al., Adv. Intell. Syst., 2021, CC BY 4.0) [169]; (d) soft glove with force feedback for rehabilitation in virtual reality (reproduced from Li et al., Biomimetics, 2023, CC BY 4.0) [170]; (e) tendon-driven haptic glove to increase realism and perceived acuity of contact force (reproduced from Baik et al., Front. Bioeng. Biotechnol., 2020, CC BY 4.0) [160].
Figure 6. Control interfaces for human interaction with soft robots: (a) four-degree-of-freedom haptic joystick for teleoperation of continuum robots (reproduced from Xie et al., Robotics, 2023, CC BY 4.0) [179]; (b) soft robotic artificial muscle controlled via hand gesture signals using a leap motion sensor (reproduced from Oguntosin and Akindele, ASTESJ, 2020, CC BY-SA 4.0) [181]; (c) brain–computer interface to control a soft robot hand (reproduced from Zhang et al., Front. Neurorobot., 2019, CC BY 4.0) [151]; (d) tendon-driven exoskeleton glove operated using a smartphone app (reproduced from Gerez et al., IEEE Access, 2020, CC BY 4.0) [92].
Figure 7. Applications of soft robots in healthcare: (a) soft multi-module manipulator for minimally invasive surgery (reproduced from Runciman et al., Soft Robotics, 2019, CC BY 4.0) [207]; (b) soft robotic exosuit for dorsiflexion and plantarflexion assistance (reproduced from Porciuncula et al., Front. Neurorobot., 2021, CC BY 4.0) [209]; (c) soft robotic exosuit for post-stroke hemiparetic walking (reproduced from Awad et al., IEEE Open J. Eng. Med. Biol., 2020, CC BY 4.0) [5]; (d) tendon-driven exoskeleton glove for hand rehabilitation (reproduced from Gerez et al., IEEE Access, 2020, CC BY 4.0) [92].
Figure 8. Applications of soft robots in exploration: (a) underwater snake robot to perform underwater operations (reproduced from Zhang et al., J. Mar. Sci. Eng., 2022, CC BY 4.0) [8]; (b) untethered soft robot capable of stable locomotion (reproduced with permission from Cao et al., Extreme Mechanics Letters, 2018. © 2018 Elsevier) [214].
Figure 9. Applications of soft robots in manufacturing: (a) back-support exoskeleton with flexible beams (reproduced from Näf et al., Front. Robot. AI, 2018, CC BY 4.0) [90]; (b) soft robotic gripper for grasping various objects (reproduced from Batsuren and Yun, Appl. Sci., 2019, CC BY 4.0) [216].
Figure 10. Compliant soft robot, CASTOR, for therapy with children with autism spectrum disorder (reproduced from Casas-Bocanegra et al., Actuators, 2020, CC BY 4.0) [218]: (a) physical interactions exhibited by the participants; (b) emotions performed by the robot's face; (c) functionalities including greeting, pointing to the head, pointing to the eyes, and dancing for potential use in therapy scenarios.
20 pages, 21236 KiB  
Article
Study on the Influence Mechanism of Soil Covering and Compaction Process on Maize Sowing Uniformity Based on DEM–MBD Coupling
by Kuo Sun, Chenglin He, Qing Zhou, Xinnan Yu, Qiu Dong, Wenjun Wang, Yulong Chen, Mingwei Li, Xiaomeng Xia, Yang Wang and Long Zhou
Agronomy 2024, 14(12), 2883; https://doi.org/10.3390/agronomy14122883 - 3 Dec 2024
Viewed by 639
Abstract
In maize production, sowing uniformity is one of the main factors affecting yield. The effect of the soil-covering and compaction process on sowing uniformity, as the final link in determining the seed bed position, needs further investigation. In this paper, the contact parameters between soil particles and boundaries are calibrated using the Plackett–Burman test and the central composite design. Furthermore, based on DEM–MBD coupling, the influence of the soil-covering and compaction process on the seed position of the seeding monomer at different forward speeds is analysed. It was found that the adhesion between the soil and the soil-touching component can have a significant effect on the contact process between the component and the soil. Therefore, the EEPA model was used to analyse the soil–component interaction process, and the contact parameters between the soil and components were obtained for the calibration. Building on this work, it was found that before and after mulching, the displacement of seed particles of all shapes in the longitudinal direction increased significantly with the advancement speed of the sowing unit, while the displacement of seed particles in the transverse and sowing depth directions decreased with increasing speed. In addition, before and after suppression, as the forward speed of the sowing unit increased, the displacement of seed particles of all shapes in the longitudinal and transverse directions gradually increased, while displacement in the sowing depth direction decreased; the disturbance of seed displacement by the mulch suppression process was not related to seed shape. As the operating speed of the seeding unit increased, the mulching compaction process significantly reduced the sowing uniformity of maize seeds. This paper provides a theoretical basis for the next step in optimising the structure and working process of soil covering and compaction.
(This article belongs to the Topic Advances in Crop Simulation Modelling)
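Sowing uniformity, the study's outcome variable, is commonly summarized from the longitudinal spacings between consecutive seeds; one simple index is the coefficient of variation (CV) of those spacings, where a lower CV means more uniform sowing. A minimal sketch under that assumption (the positions below are invented, not the paper's data):

```python
import numpy as np

def spacing_cv(seed_positions_mm):
    """Coefficient of variation of consecutive seed spacings (lower = more uniform)."""
    spacings = np.diff(np.sort(np.asarray(seed_positions_mm, dtype=float)))
    return spacings.std(ddof=1) / spacings.mean()

# Hypothetical longitudinal seed positions (mm) at two forward speeds.
slow = [0, 251, 498, 752, 1001, 1249]   # close to a nominal 250 mm spacing
fast = [0, 231, 514, 729, 1042, 1238]   # more scatter after covering/compaction
print(f"CV slow: {spacing_cv(slow):.3f}, CV fast: {spacing_cv(fast):.3f}")
```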
Figures

Figure 1. Analysing the adhesion of soil-touching components to the soil using the TA.XTC-18 texture analyser.
Figure 2. Sketch of the device for the inclined slip test.
Figure 3. Simulation of the ramp slip test of the component with soil particles: (a) before rotation and (b) after rotation.
Figure 4. Measuring devices for the angle of repose: (a) experimental set-up and (b) device simulation.
Figure 5. Core-share structure.
Figure 6. Spoon-wheel seeding apparatus.
Figure 7. Three-dimensional model of the seeding monomer.
Figure 8. Analytical model of the seeding monomer in RecurDyn.
Figure 9. Soil slots generated in the EDEM software.
Figure 10. Simulation interface of the two software packages at a simulation time of 0.15 s.
Figure 11. Simulation interface of the two software packages at a simulation time of 1.5 s.
Figure 12. Simulation interface of the two software packages at a simulation time of 3 s.
Figure 13. Calculation of the variation in seed location in the longitudinal, transverse, and sowing depth directions before and after covering with soil.
Figure 14. Calculation of the change in seed position in the longitudinal, transverse, and sowing depth directions before and after suppression.
Figure 15. Distribution of maize seeds at a seeding monomer forward speed of 0.75 m/s.
Figure 16. Changes in seed position at a seeding unit forward speed of 1.11 m/s.
Figure 17. Distribution of maize seeds at a seeding unit forward speed of 1.47 m/s.
Figure 18. Variation curves of normal contact force versus displacement between soil and component at different water contents: (a) 15%, (b) 20%, and (c) 25%.
Figure 19. Response surface plots of the interaction of the factors on slip angle.
Figure 20. Illustration of the angle of repose.
Figure 21. Shape of seed furrows formed by the seeding unit at three speeds.
Figure 22. Contours of the seed channel shape at three speeds of the seeding monomer.
Figure 23. Slopes of the seed channel at different forward speeds.
Figure 24. Seed position offset for each shape before and after coverage at different forward speeds.
Figure 25. Seed position offset for each shape before and after compaction at different forward speeds.
Figure 26. Uniformity of sowing at different forward speeds of the seeding monomer.
18 pages, 2775 KiB  
Article
To Touch or Not to Touch: Navigating the Ethical and Monetary Dilemma in Giant Panda Tourism
by Yulei Guo and David Fennell
Tour. Hosp. 2024, 5(4), 1309-1326; https://doi.org/10.3390/tourhosp5040073 - 2 Dec 2024
Viewed by 391
Abstract
Tourists consistently demonstrate the need to touch wildlife, although policies often deny these experiences because of the psychological and physiological impacts on animals. However, philosophers contend that humans can learn to empathize with animals by feeling their way into the plight of animals through touch. Facing this dilemma, the paper asks if human touch can be ethically experienced in tourist interactions with animals by employing animal health warning labels. Using the case of “holding a panda” at the Chengdu Research Base of Giant Panda Breeding, Sichuan, China, the study investigates this dilemma through Johann Gottfried Herder’s philosophy on empathy and touch against the no-touch policies. A survey containing four scenarios shows that the use of payment can serve as a more effective tool than ethical appeal in reducing people’s decision to hold a panda through its inclusion of additional factors in the decision process. However, ethical touch building on animal health warning labels demands spaces for mutual respect, conservation awareness, and the recognition of health risks through a direct confrontation of the established emotional and sensual aesthetic appeal of cuteness between visitors and the panda. It is found that a combined use of payment and ethical appeal is necessary to restructure visitors’ willingness to hold a panda.
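The CHAID trees shown in the study's figures split survey responses on the predictor with the strongest chi-squared association with the hold/not-hold decision. The core test behind such a split is easy to reproduce; a minimal sketch with invented counts (SciPy assumed, not the study's data):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = scenario (free vs. paid holding),
# columns = decision (hold vs. not hold).
table = [[72, 28],   # free: 72 hold, 28 not hold
         [41, 59]]   # paid: 41 hold, 59 not hold
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p suggests payment shifts the decision
```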
Figures

Figure 1. Ethical touch: Giant panda health warning labels.
Figure 2. Four scenarios and the surveying process.
Figure 3. The research team and the survey stall.
Figure 4. Distribution of total scores from part 2 of the questionnaire.
Figure 5. CHAID decision tree and bar chart for “Not hold” reasons in Scenario 1.
Figure 6. CHAID decision tree for Scenario 2.
Figure 7. CHAID decision tree and bar chart for “Not hold” reasons in Scenario 3.
21 pages, 3625 KiB  
Article
Multimodal Material Classification Using Visual Attention
by Mohadeseh Maleki, Ghazal Rouhafzay and Ana-Maria Cretu
Sensors 2024, 24(23), 7664; https://doi.org/10.3390/s24237664 - 29 Nov 2024
Viewed by 486
Abstract
The material of an object is an inherent property that can be perceived through various sensory modalities, yet the integration of multisensory information substantially improves the accuracy of these perceptions. For example, differentiating between a ceramic and a plastic cup with similar visual properties may be difficult when relying solely on visual cues. However, the integration of touch and audio feedback when interacting with these objects can significantly clarify these distinctions. Similarly, combining audio and touch exploration with visual guidance can optimize the sensory examination process. In this study, we introduce a multisensory approach for categorizing object materials by integrating visual, audio, and touch perceptions. The main contribution of this paper is the exploration of a computational model of visual attention that directs the sampling of touch and audio data. We conducted experiments using a subset of 63 household objects from a publicly available dataset, the ObjectFolder dataset. Our findings indicate that incorporating a visual attention model enhances the ability to generalize material classifications to new objects and achieves superior performance compared to a baseline approach, where data are gathered through random interactions with an object’s surface.
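One common way to combine per-modality predictions like the visual, touch, and audio classifiers evaluated here is late fusion: averaging the class probabilities each modality produces. The sketch below illustrates that general idea only and is not the paper's actual fusion scheme; all numbers are invented:

```python
import numpy as np

def late_fusion(prob_maps, weights=None):
    """Weighted average of per-modality class probabilities; returns the fused class."""
    stacked = np.stack([np.asarray(p, dtype=float) for p in prob_maps])
    w = np.ones(len(prob_maps)) if weights is None else np.asarray(weights, dtype=float)
    fused = (w[:, None] * stacked).sum(axis=0) / w.sum()
    return int(fused.argmax()), fused

# Hypothetical softmax outputs over three materials (ceramic, plastic, metal).
vision = [0.45, 0.44, 0.11]   # visually ambiguous
touch  = [0.70, 0.20, 0.10]   # touch is more decisive
audio  = [0.60, 0.25, 0.15]
label, fused = late_fusion([vision, touch, audio])
print(label, np.round(fused, 3))   # -> 0 (ceramic), since touch and audio break the tie
```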
Figures

Figure 1. Object material recognition framework.
Figure 2. Confusion matrices based on the visual data modality (a) for random sampling and (b) for visual attention sampling.
Figure 3. Confusion matrices based on the touch modality (a) for random sampling and (b) for visual attention sampling.
Figure 4. Confusion matrices based on the audio modality (a) for random sampling and (b) for visual attention sampling.
Figure 5. The learning curve showing (a) the loss and (b) the accuracy for visual data with random sampling.
Figure 6. The learning curve showing (a) the loss and (b) the accuracy for visual data with visual attention sampling.
Figure 7. The learning curve showing (a) the loss and (b) the accuracy for touch data with visual attention sampling.
Figure 8. The learning curve showing (a) the loss and (b) the accuracy for audio data with visual attention sampling.
Figure 9. The learning curve showing (a) the loss and (b) the accuracy for touch data with random sampling.
Figure 10. The learning curve showing (a) the loss and (b) the accuracy for audio data with random sampling.
23 pages, 41330 KiB  
Article
Free-Hand Input and Interaction in Virtual Reality Using a Custom Force-Based Digital Thimble
by Tafadzwa Joseph Dube and Ahmed Sabbir Arif
Appl. Sci. 2024, 14(23), 11018; https://doi.org/10.3390/app142311018 - 27 Nov 2024
Viewed by 547
Abstract
This article presents the Digital Thimble, an index-finger-wearable device designed for free-hand interactions in virtual reality (VR) by varying the touch contact force on a surface. It contains an optical mouse sensor for tracking and a pressure sensor for detecting contact force. A Fitts’ law study compared the Digital Thimble with a commercial finger mouse and a VR controller using both on-press and on-release selection methods. The results showed that the finger mouse provided higher throughput (3.11 bps) and shorter movement time (1258 ms) than the VR controller (2.89 bps; 1327 ms) and the Digital Thimble (2.61 bps; 1487 ms). Further evaluation in sorting and teleportation tasks demonstrated that the Digital Thimble delivered better accuracy and precision. Participants favored the Digital Thimble for its comfort and convenience, highlighting its potential as a user-friendly VR input device.
(This article belongs to the Special Issue Human–Computer Interaction and Virtual Environments)
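The throughput figures quoted above follow the ISO 9241-9 convention (the standard cited in Figure 3): effective target width is derived from the spread of selection endpoints, and throughput is the effective index of difficulty divided by mean movement time. A minimal sketch of that computation, with invented trial data:

```python
import math
import statistics

def throughput(amplitude, endpoint_deviations, movement_times_s):
    """ISO 9241-9 throughput (bits/s) for one amplitude-width condition.

    amplitude: nominal center-to-center movement distance
    endpoint_deviations: signed endpoint offsets from target center, same unit
    movement_times_s: per-trial movement times in seconds
    """
    we = 4.133 * statistics.stdev(endpoint_deviations)   # effective width
    ide = math.log2(amplitude / we + 1)                  # effective index of difficulty
    return ide / statistics.mean(movement_times_s)

# Hypothetical trials: 256 px amplitude, endpoint offsets (px), movement times (s).
offsets = [3.1, -5.2, 7.9, -2.4, 0.8, 6.3, -4.1, 1.9]
times = [1.21, 1.35, 1.18, 1.42, 1.30, 1.27, 1.39, 1.25]
print(f"TP = {throughput(256, offsets, times):.2f} bps")
```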
Figures

Figure 1. Different components of the Digital Thimble: (a) The Unique Station Mini Wireless finger mouse from which the optical mouse sensor was sourced. (b) The disassembled finger mouse, showing the circuit and the optical mouse sensor. (c) The Digital Thimble components, including the pressure sensor, optical mouse sensor, and 3D-printed case that houses the circuitry.
Figure 2. The devices used in the evaluation: (a) An Oculus Touch controller. (b) An AOKID Creative Finger Mouse. (c) The Digital Thimble.
Figure 3. The 2D Fitts’ law task in ISO 9241-9 [63]. The target is indicated in red. Arrows and numbers illustrate the selection sequence.
Figure 4. Three participants performing Fitts’ law tasks in the first user study using the (a) controller, (b) finger mouse, and (c) Digital Thimble.
Figure 5. Average throughput (bps) by input device and selection method. Error bars represent ±1 standard deviation.
Figure 6. Average movement time (ms) by input device and selection method. Error bars represent ±1 standard deviation.
Figure 7. Average target re-entries (count/trial) by input device and selection method. Error bars represent ±1 standard deviation.
Figure 8. Cursor trace examples for the on-press selection method using the (a) controller, (b) finger mouse, and (c) Digital Thimble.
Figure 9. Cursor trace examples for the on-release selection method using the (a) controller, (b) finger mouse, and (c) Digital Thimble.
Figure 10. Average error rate (%) by input device and selection method. Error bars represent ±1 standard deviation.
Figure 11. The median perceived workload across user study conditions, measured by a 20-point NASA-TLX questionnaire. The scale from 1 to 20 represents very low to very high ratings for all factors except performance, where 1 to 20 represents the range from perfect to failure. Error bars represent ±1 standard deviation. Red asterisks denote statistically significant differences.
Figure 12. The median perceived usability of the study conditions, rated on a 5-point Likert scale (1 = strongly disagree; 5 = strongly agree). Error bars represent ±1 standard deviation. Red asterisks denote statistically significant differences.
Figure 13. (a) Bird’s-eye view of the teleportation destinations, with red arrows indicating the designated path. The green target marks the starting point. (b) An animated cylindrical target with a cube displaying the target’s number.
Figure 14. Participants using the three devices to teleport: (a) Controller. (b) Finger mouse. (c) Digital Thimble.
Figure 15. The sorting scene featuring four numbered cubes on a table.
Figure 16. Participants using the three devices to sort cubes: (a) Controller. (b) Finger mouse. (c) Digital Thimble.
Figure 17. Average task completion time (ms) categorized by task and input device. Error bars represent ±1 standard deviation.
Figure 18. Average accuracy rate (%) categorized by task and input device. Error bars represent ±1 standard deviation.
Figure 19. The median perceived workload across the study conditions, measured by a 20-point NASA-TLX questionnaire. The scale from 1 to 20 represents very low to very high ratings for all factors except performance, where 1 to 20 represents the range from perfect to failure. Error bars represent ±1 standard deviation. Red asterisks denote statistically significant differences.
Figure 20. The median perceived usability of the study conditions, rated on a 5-point Likert scale (1 = strongly disagree; 5 = strongly agree). Error bars represent ±1 standard deviation.
19 pages, 12690 KiB  
Article
TouchView: Mid-Air Touch on Zoomable 2D View for Distant Freehand Selection on a Virtual Reality User Interface
by Woojoo Kim and Shuping Xiong
Sensors 2024, 24(22), 7202; https://doi.org/10.3390/s24227202 - 11 Nov 2024
Viewed by 578
Abstract
Selection is a fundamental interaction element in virtual reality (VR) and 3D user interfaces (UIs). Raycasting, one of the most common object selection techniques, is known to have difficulties in selecting small or distant objects. Meanwhile, recent advancements in computer vision technology have enabled seamless vision-based hand tracking in consumer VR headsets, enhancing accessibility to freehand mid-air interaction and highlighting the need for further research in this area. This study proposes a new technique called TouchView, which utilizes a virtual panel with a modern adaptation of the Through-the-Lens metaphor to improve freehand selection for VR UIs. TouchView enables faster and less demanding target selection by allowing direct touch interaction with the magnified object proxies reflected on the panel view. A repeated-measures ANOVA on the results of a follow-up experiment on multitarget selection with 23 participants showed that TouchView outperformed the current market-dominating freehand raycasting technique, Hybrid Ray, in terms of task performance, perceived workload, and preference. User behavior was also analyzed to understand the underlying reasons for these improvements. The proposed technique can be used in VR UI applications to enhance the selection of distant objects, especially for cases with frequent view shifts.
(This article belongs to the Section Sensing and Imaging)
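The abstract reports a repeated-measures ANOVA over within-subject factors. As a sketch of how such a two-factor within-subjects analysis can be run, the snippet below uses statsmodels' AnovaRM on a fabricated long-format dataset (technique × target size; all numbers invented, not the study's data):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Fabricated long format: every subject contributes one value per technique x size cell.
rows = []
for subj in range(1, 9):
    for tech, base in [("HybridRay", 9.0), ("TouchView", 7.5)]:
        for size, penalty in [("small", 2.0), ("medium", 1.0), ("large", 0.0)]:
            rows.append({"subject": subj, "technique": tech, "size": size,
                         "time_s": base + penalty + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="time_s", subject="subject",
              within=["technique", "size"]).fit()
print(res)   # F and p for technique, size, and their interaction
```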
Figures

Figure 1. The working mechanism of TouchView: (a) main components, (b) zooming the view in and out, and (c) schematic explanation of the mechanism. Note: The components marked with dotted lines were invisible. A video demonstration of Hybrid Ray and TouchView performing the experimental task can be seen at the following link: https://vimeo.com/1019796139 (accessed on 31 October 2024).
Figure 2. Experimental setup.
Figure 3. (a) Two selection techniques and (b) three target sizes of visual angle (diameter) used in this study. Note: A video demonstration of Hybrid Ray and TouchView performing the experimental task can be seen at the following link: https://vimeo.com/1019796139 (accessed on 31 October 2024).
Figure 4. Target placement in the multitarget selection task. Note: VA indicates visual angle.
Figure 5. Boxplot of (a) task completion time and (b) miss rate by technique and target size. Note: The cross mark (×) indicates the mean, and the black circle mark (●) indicates values outside the interquartile range. The asterisk mark (****) indicates the significance of the post hoc analysis with a Bonferroni adjustment (p < 0.0001). The same note applies to all other boxplots in the remaining text.
Figure 6. Boxplot of raw and weighted NASA-TLX ratings by technique and target size. Note: The asterisk marks indicate the significance of the post hoc analysis with a Bonferroni adjustment (* p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001).
Figure 7. Boxplot of movement of the (a) dominant hand, (b) nondominant hand, and (c) head by technique and target size. Note: The asterisk marks indicate the significance of the post hoc analysis with a Bonferroni adjustment (* p < 0.05, *** p < 0.001, **** p < 0.0001).
Figure 8. (a) Percentage of trials using only the dominant hand and (b) mean number of correct and incorrect selections (hit and miss) made by the dominant and nondominant hand.
Figure 9. Boxplot of the target visual angle for Hybrid Ray and TouchView. Note: TouchView (Adj) indicates the target visual angle after adjustment by zooming in or out. The asterisk mark (****) indicates the significance of the post hoc analysis with a Bonferroni adjustment (p < 0.0001).
Figure 10. Frequency and percentage of the preferred technique by target size.
13 pages, 1449 KiB  
Article
Evaluating the User Experience and Usability of the MINI Robot for Elderly Adults with Mild Dementia and Mild Cognitive Impairment: Insights and Recommendations
by Aysan Mahmoudi Asl, Jose Miguel Toribio-Guzmán, Álvaro Castro-González, María Malfaz, Miguel A. Salichs and Manuel Franco Martín
Sensors 2024, 24(22), 7180; https://doi.org/10.3390/s24227180 - 8 Nov 2024
Viewed by 684
Abstract
Introduction: In recent years, the integration of robotic systems into various aspects of daily life has become increasingly common. As these technologies continue to advance, ensuring user-friendly interfaces and seamless interactions becomes more essential. For social robots to genuinely provide lasting value to humans, a favourable user experience (UX) emerges as an essential prerequisite. This article aimed to evaluate the usability of the MINI robot, highlighting its strengths and areas for improvement based on user feedback and performance. Materials and Methods: In a controlled lab setting, a mixed-method qualitative study was conducted with ten individuals aged 65 and above diagnosed with mild dementia (MD) and mild cognitive impairment (MCI). Participants engaged in individual MINI robot interaction sessions, completing cognitive tasks as per written instructions. Video and audio recordings documented interactions, while post-session System Usability Scale (SUS) questionnaires quantified usability perception. Ethical guidelines were followed, ensuring informed consent, and the data underwent qualitative and quantitative analyses, contributing insights into the MINI robot’s usability for this demographic. Results: The study addresses the ongoing challenges that tasks present, especially for MD individuals, emphasizing the importance of user support. Most tasks require both verbal and physical interactions, indicating that MD individuals face challenges when switching response methods within subtasks. These complexities originate from the selection and use of response methods, including difficulties with voice recognition, tablet touch, and tactile sensors. These challenges persist across tasks, with individuals with MD struggling to comprehend task instructions and provide correct answers and individuals with MCI struggling to use response devices, often due to the limitations of the robot’s speech recognition. Technical shortcomings have been identified. The results of the SUS indicate positive perceptions, although there are lower ratings for instructor assistance and pre-use learning. The average SUS score of 68.3 places device usability in the “good” category. Conclusions: Our study examines the usability of the MINI robot, revealing strengths in quick learning, simple system and operation, and integration of features, while also highlighting areas for improvement. Careful design and modifications are essential for meaningful engagement with people with dementia. The robot could better benefit people with MD and MCI if clear, detailed instructions and instructor assistance were available.
(This article belongs to the Section Sensors and Robotics)
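The SUS score of 68.3 reported above comes from the scale's fixed scoring formula: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch (the response sheet is invented):

```python
def sus_score(ratings):
    """SUS score from ten 1-5 ratings (item 1 = ratings[0], ..., item 10 = ratings[9])."""
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)  # odd items: r-1; even items: 5-r
                     for i, r in enumerate(ratings)]
    return 2.5 * sum(contributions)

# Hypothetical responses from one participant.
print(sus_score([4, 2, 4, 3, 4, 2, 5, 2, 4, 3]))  # -> 72.5
```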
Figures

Figure 1. MINI robot components.
Figure 2. MINI robot apps categorized: blue for general categories, purple for entertainment types, and orange for specific activities within each.
Figure 3. SUS score for each item.
17 pages, 4004 KiB  
Article
Designing a Tactile Document UI for 2D Refreshable Tactile Displays: Towards Accessible Document Layouts for Blind People
by Sara Alzalabny, Omar Moured, Karin Müller, Thorsten Schwarz, Bastian Rapp and Rainer Stiefelhagen
Multimodal Technol. Interact. 2024, 8(11), 102; https://doi.org/10.3390/mti8110102 - 8 Nov 2024
Viewed by 703
Abstract
Understanding document layouts is vital for enhancing document exploration and information retrieval for sighted individuals. However, for blind and visually impaired people, it becomes challenging to have access to layout information using typical assistive technologies such as screen readers. In this paper, we examine the potential benefits of presenting documents on two-dimensional (2D) refreshable tactile displays. These displays enable the tactile perception of 2D data, offering the advantage of dynamic and interactive functionality. Despite their potential, the development of user interfaces (UIs) for such displays has not advanced significantly. Thus, we propose a design of an intelligent tactile user interface (TUI), incorporating touch and audio feedback to represent documents in a tactile format. Our exploratory study for evaluating this approach revealed satisfaction from participants with the experience of directly viewing documents in their true form, rather than relying on screen-reading interpretations. Additionally, participants offered recommendations for incorporating additional features and refining the approach in future iterations. To facilitate further research and development, we have made our dataset and models publicly available.
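Figure 2 below indicates that the layout-extraction stage serializes each detected element as a JSON record carrying bounding box coordinates, reading order, and OCR text. A sketch of what one such record might look like; the field names are guesses for illustration, not the authors' published schema:

```python
import json

# Hypothetical output of the detection + OCR stage for one page.
page = {
    "page": 1,
    "elements": [
        {"id": 0, "type": "title",     "bbox": [72, 54, 540, 96],
         "reading_order": 0, "text": "Designing a Tactile Document UI"},
        {"id": 1, "type": "paragraph", "bbox": [72, 120, 540, 310],
         "reading_order": 1, "text": "Understanding document layouts is vital..."},
        {"id": 2, "type": "figure",    "bbox": [90, 330, 520, 610],
         "reading_order": 2, "text": ""},  # figures carry no OCR text
    ],
}
print(json.dumps(page, indent=2))
```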
Figures

Graphical abstract
Figure 1. The pipeline of our tactile document system, consisting of (a) the layout extraction module, which utilizes the YOLOv10 detection model and an OCR model, as well as ChatGPT, to extract metadata from each predicted bounding box, and (b) the tactile representation module, which is responsible for representing the document’s metadata in a tactile format. This module handles touch and button interactions, and provides audio feedback for the auditory representation of text elements.
Figure 2. An example of a document at the various stages of the system pipeline: (a) Following document segmentation, bounding boxes are generated for each element, with IDs assigned to each bounding box. (b) A JSON file is created containing the bounding box coordinates, reading order, and OCR-generated text. (c) The tactile representation of the document using our interface. In the first view mode, the interface displays bounding boxes, while the second view mode shows only Braille letters and element identifiers to represent the document elements.
Figure 3. The different views available in the tactile document interface, based on the VISM principles. (a) Element identifier overview mode. (b) Bounding boxes overview mode. (c) Selection of an element to explore through navigation buttons or touch. (d) Zoom and filter view. (e) Details-on-demand view.
Figure 4. The interactions available in the tactile document interface and the corresponding buttons used on the HyperBraille display. (a) Navigation controls, (b) audio feedback, (c) view mode button, (d) help buttons, (e) page number, (f) back button, (g) document element with a selection box, (h) file name footer, and (i) page navigation button.
Figure 5. (a) A document represented in the overview mode using the proposed interface. (b) The corresponding auditory information given to the user after clicking the help button.
21 pages, 13668 KiB  
Article
The Content-Specific Display: Between Medium and Metaphor
by Lukas Van Campenhout, Elke Mestdagh and Kristof Vaes
Designs 2024, 8(6), 109; https://doi.org/10.3390/designs8060109 - 25 Oct 2024
Viewed by 666
Abstract
This paper examines the current generation of displays, as found primarily in smartphones, laptops and tablet computers, from an interaction design perspective. Today’s displays are multifunctional, versatile devices with a standardized, rectangular shape and a standardized interaction. We distinguish two pitfalls. First, they facilitate an interaction that is isolated and detached from the physical environment in which they are used. Second, their multi-touch interface provides users with few tangible clues and handles. From our background in embodied interaction, we establish an alternate, hypothetical vision of displays: the content-specific display. The content-specific display is defined as a display designed for one specific function and one type of on-screen content. We explore this concept in three student projects from the First Year Master’s program at Product Development, University of Antwerp, and present two key themes that emerge from it: causality and transformation. Both themes reside in the field of coupling, a concept well-known within the field of embodied interaction, and aim at a more seamless integration of on-screen content within the physical world. Finally, we discuss how the content-specific display influences the design process of digital products, and how it fosters collaboration between product designers and designers of graphical user interfaces.
(This article belongs to the Section Smart Manufacturing System Design)
Figures

Figure 1. Drawing on a pen display.
Figure 2. Balans in neutral mode.
Figure 3. (a) Chiara pulls out the slider; Balans goes to record mode. (b) Chiara records a message. (c) Chiara pushes the slider back in.
Figure 4. (a) Balans in play mode. (b) Sien slides her finger toward the speaker. (c) Balans plays the recorded message.
Figure 5. Concept for a wall thermostat.
Figure 6. Furo.
Figure 7. (a) The red fluid is displayed on the horizontal surface. (b) The user pulls the slider up. (c) The desired drinking volume is reached.
Figure 8. (a) A position is indicated on the horizontal surface. (b) The user places a glass bowl. (c) The fluid flows out of the tap into the bowl.
Figure 9. (a) As the bowl fills up, the slider moves down. (b) The user takes the bowl. (c) The red fluid reappears on the horizontal surface.
Figure 10. Opus.
Figure 11. (a) The user inserts the data carrier. (b) The data carrier slides into the terminal and transforms into an on-screen rectangle. (c) The on-screen rectangle moves toward the data storage device.
Figure 12. (a) The push button moves up. (b) The user pushes the button. (c) The blue rectangle moves out of the terminal’s display.
Figure 13. (a) The white outline moves to the user. (b) The outline transforms into the data carrier. (c) The user takes the data carrier out of the terminal.
Figure 14. (a) First sketches. (b) Different shapes offer different action possibilities.
Figure 15. (a) First cardboard model. (b) First projections.
Figure 16. (a) Sketches of the enlarged side plan. (b) A second physical model, with projections.
Figure 17. The image of the post horn appears in the sketches.
12 pages, 7176 KiB  
Article
Abrasive Wear Characteristics of 30MnB5 Steel for High-Speed Plough Tip of Agricultural Machinery in Southern Xinjiang Region
by Xiaorui Han, Qiang Yao, Mingjian Li, Zhanhong Guo, Pengwei Fan, Ling Zhou and Youqiang Zhang
Lubricants 2024, 12(11), 367; https://doi.org/10.3390/lubricants12110367 - 24 Oct 2024
Viewed by 628
Abstract
The high-speed plough tip is the core soil-touching component in southern Xinjiang field cultivation, but the interaction of the plough tip with the soil results in severe wear of the tip. The friction behaviour of sand and soil on plough tips was investigated with a homemade rotary abrasive wear tester in a one-factor multilevel test with three parameters: moisture content, velocity/rotational speed and friction distance. The objective was to study the friction behaviour of the sand soil and plough tip and analyse and characterise the wear amount, wear thickness and compressive stress distribution, three-dimensional wear morphology and microscopic wear morphology of the plough tips. The results show that with increasing speed, the wear amount changes more gently; with increasing soil water content, the soil adhesion force and lubricating water film increase so that the wear amount follows a second-order parabolic law; and with increasing friction distance, the wear amount gradually increases, and the wear rate also shows an upward trend when the plough tip is in the abrasive wear stage. The tip makes contact with the firmer soil with higher surface compressive stresses, causing the most wear. As the friction distance increases, sand particles become embedded in the contact surfaces, creating a groove effect along with spalling pits caused by fatigue wear. During the whole wear period, the groove effect is always accompanied by spalling pits appearing repeatedly. The analysis of the wear micromorphology of the plough tip shows that the number of flaking pits gradually decreases in the direction of soil movement, and the form of damage changes from impact wear to plough groove scratches. Abrasive wear interacts with corrosive wear to exacerbate plough tip wear.
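The abstract notes that wear amount follows a second-order parabolic law in soil water content; fitting such a curve to measured wear is a one-liner with NumPy. A sketch with invented measurements (not the paper's data):

```python
import numpy as np

# Hypothetical wear amount (mg) measured at several soil water contents (%).
moisture = np.array([10, 15, 20, 25, 30], dtype=float)
wear = np.array([182, 231, 268, 247, 205], dtype=float)

# Second-order (parabolic) least-squares fit: wear ~ a*m^2 + b*m + c
a, b, c = np.polyfit(moisture, wear, deg=2)
peak_moisture = -b / (2 * a)   # vertex of the downward-opening parabola
print(f"fit: {a:.3f}m^2 + {b:.2f}m + {c:.1f}; peak wear near {peak_moisture:.1f}% water")
```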
Figures

Figure 1. (a) LK500 high-speed hydraulic turning plough; (b) plough tip.
Figure 2. Schematic diagram of the discrete-element simulation.
Figure 3. (a) Rotary abrasive wear tester; (b) structure of the tester; (c) principle of plough tip wear; (d) worn plough tip and sample.
Figure 4. Plough tip wear amount.
Figure 5. (a) Plot of wear thickness deviation of the plough tip. (b) Plot of compressive stress distribution on the surface of the plough tip.
Figure 6. The plough tip projection variation diagram.
Figure 7. Three-dimensional wear topography for different friction distances: (a) original surface; (b) 113 km; (c) 226 km; and (d) 339 km. (e) Two-dimensional cross-section curves for the 226 km 3D wear surface; (f) 2D cross-section curves for the 339 km 3D wear surface.
Figure 8. Wear surfaces: (a) cutting edge; (b) middle of the sample; and (c) top of the sample. (d) Soil grain indentation; (e) soil grain rolling tooth marks.
Figure 9. Corrosive wear and EDS analysis of (a) salt accumulation; (b) crystals; and (c) corrosion traces; (d) analysis of the composition of the crystals; (e) analysis of corrosion products.
Full article ">
24 pages, 6838 KiB  
Article
Affective Stroking: Design Thermal Mid-Air Tactile for Assisting People in Stress Regulation
by Sheng He, Hao Zeng, Mengru Xue, Guanghui Huang, Cheng Yao and Fangtian Ying
Appl. Sci. 2024, 14(20), 9494; https://doi.org/10.3390/app14209494 - 17 Oct 2024
Viewed by 842
Abstract
Haptics for stress regulation has developed considerably in recent years. Using vibrotactile feedback to present biofeedback and guide breathing or heartbeat regulation is the dominant technical approach. However, designing computer-mediated affective touch for stress regulation is also a promising direction that has not been fully explored. In this paper, a haptic device was developed to test whether computer-mediated affective stroking on the forearm could help people reduce stress. In our method, we used mid-air technology to generate subtle pressure forces by blowing air while simultaneously generating thermal feedback with Peltier elements. Firstly, we identified intensity and velocity parameters that produce comfortable and pleasant stroking sensations. Afterward, an experiment was conducted to find out whether this approach could help people mediate their perceived and physiological stress. A total of 49 participants were randomly assigned to either a Stroking Group (SG) or a Control Group (CG). Results showed that participants from the SG felt more relaxed than those from the CG. The physiological stress index RMSSD increased and LF/HF decreased in the SG, although these changes were not statistically significant. Our exploration created subtle, non-invasive, noiseless haptic sensations. It could be a promising alternative for assisting people in stress regulation. Design implications and future applicable scenarios are discussed.
(This article belongs to the Special Issue Emerging Technologies of Human-Computer Interaction)
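RMSSD, the time-domain heart-rate-variability index tracked in this study, is the root mean square of successive differences between consecutive inter-beat (RR) intervals; higher values generally indicate stronger parasympathetic (relaxation-related) activity. A minimal sketch (the RR series are invented):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical RR intervals (ms) before and during the stroking intervention.
baseline = [812, 798, 830, 805, 821, 793, 816]
stroking = [840, 805, 862, 818, 851, 809, 845]
print(f"RMSSD baseline: {rmssd(baseline):.1f} ms, stroking: {rmssd(stroking):.1f} ms")
```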
Figures

Figure 1. Hardware design: (a) hardware of the stroking device and (b) framework of the whole system.
Figure 2. Arrangement of the 8 fans and blowing-air pressure on the skin to imitate a stroking sensation.
Figure 3. Fans around the outer and upper sides of the arm, blowing air pressure on the forearm.
Figure 4. The two-way interaction of Intensity × Duration on four metrics: (a) perceived continuity; (b) perceived authenticity; (c) perceived comfort; and (d) perceived pleasantness.
Figure 5. Force generated by a single fan blowing once for 3 s (a) and 5 s (b).
Figure 6. Comparison of pressure parameters caused by air and human touch for fast stroking.
Figure 7. Comparison of pressure parameters caused by air and human touch for slow stroking.
Figure 8. Experiment design: (a) experiment setup and (b) experiment environment.
Figure 9. Experiment procedure.
Figure 10. STAI scores: (a) mean scores across three phases and (b) changes during the relaxation phase. (*: p < 0.050).
Figure 11. RMSSD metrics: (a) mean RMSSD across three phases and (b) changes in RMSSD during the relaxation phase.
Figure 12. LF/HF metrics: (a) mean LF/HF across three phases and (b) changes in LF/HF during the relaxation phase. (*: abnormal value).
17 pages, 8226 KiB  
Article
Design of a Capacitive Tactile Sensor Array System for Human–Computer Interaction
by Fei Fei, Zhenkun Jia, Changcheng Wu, Xiong Lu and Zhi Li
Sensors 2024, 24(20), 6629; https://doi.org/10.3390/s24206629 - 14 Oct 2024
Viewed by 752
Abstract
This paper introduces a novel capacitive sensor array designed for tactile perception applications. Utilizing an all-in-one inkjet deposition printing process, the sensor array exhibited exceptional flexibility and accuracy. With a resolution of up to 32.7 dpi, the sensor array was capable of capturing the fine details of touch inputs, making it suitable for applications requiring high spatial resolution. The design incorporates two multiplexers to achieve a scanning rate of 100 Hz, ensuring the rapid and responsive data acquisition that is essential for real-time feedback in interactive applications, such as gesture recognition and haptic interfaces. To evaluate the performance of the capacitive sensor array, an experiment that involved handwritten number recognition was conducted. The results demonstrated that the sensor accurately captured fingertip inputs with a high precision. When combined with an Auxiliary Classifier Generative Adversarial Network (ACGAN) algorithm, the sensor system achieved a recognition accuracy of 98% for various handwritten numbers from “0” to “9”.
(This article belongs to the Section Sensors Development)
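The abstract describes a 16 × 16 array read through two multiplexers at a 100 Hz frame rate. Below is a minimal sketch of that row/column sweep; `read_capacitance` is a hypothetical stand-in for the Arduino-driven capacitance measurement module, and the synthetic values are illustrative only.

```python
import numpy as np

ROWS, COLS = 16, 16      # electrode grid size from the abstract
SCAN_RATE_HZ = 100       # full-frame scanning rate from the abstract

def read_capacitance(row: int, col: int) -> float:
    """Hypothetical measurement call: in the real system, two multiplexers
    route the selected row and column electrodes to the capacitance
    measurement module. Here we return synthetic values in pF."""
    return float(np.random.normal(10.0, 0.1))

def scan_frame() -> np.ndarray:
    """One full sweep over all 256 row/column intersections."""
    frame = np.empty((ROWS, COLS))
    for r in range(ROWS):        # multiplexer 1 selects the row electrode
        for c in range(COLS):    # multiplexer 2 selects the column electrode
            frame[r, c] = read_capacitance(r, c)
    return frame

baseline = scan_frame()
delta = scan_frame() - baseline  # a fingertip appears as a local capacitance change
```

Subtracting a baseline frame in this way is a common first step before visualising touch trajectories like those in Figures 6–8 below.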
Show Figures

Figure 1. Sensor fabrication process: (a) a polyester film as the printing substrate; (b) PVA coating onto the polyester film; (c) printing of the row electrodes; (d) printing of the column electrodes; (e) printing of the interconnects; (f) soldering of the electronic components; and (g) the final fabricated sensor device.
Figure 2. Design of the sensor system: (a) a 16 × 16 capacitive sensor with a pixel resolution of 32.7 dpi; (b) a 0.4 mm × 0.4 mm diamond sensing element with 0.1 mm interconnects.
Figure 3. Demonstration of the capacitive tactile sensor system: (a) the hardware system, comprising the capacitive sensor, the Arduino controller, the capacitance measurement module, and two multiplexers; (b) the high-resolution micro-capacitive array; (c) connection of the hardware system.
Figure 4. Comparison of abnormal and normal capacitance values.
Figure 5. (a) Without a finger touching, the dielectric between the emitter and the receiver is the photopolymer and air. (b) With a finger touching, the dielectric is the photopolymer, air, and the finger. (c) Change in the sensor capacitance before and after finger touch.
Figure 6. Sequence of capacitance values corresponding to the trajectory of the number “0”.
Figure 7. Sliding motion with the index finger and the visualized trajectory of the number “0”.
Figure 8. Visualized trajectories of the numbers “1” to “9”.
Figure 9. The training processes of the (a) GAN, (b) CGAN, and (c) ACGAN.
Figure 10. The generator model of the ACGAN.
Figure 11. The discriminator model of the ACGAN.
Figure 12. Trajectory heatmaps of the numbers “0–9”: (a) heatmaps obtained by drawing numbers on the capacitive sensor with a finger; (b) heatmaps generated by the GAN model’s generator.
Figure 13. The (a) loss and (b) auxiliary loss of the model, and the confusion matrices of the discriminator on the (c) validation set and (d) fake-image set.
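Figures 9–13 outline the GAN, CGAN, and ACGAN training processes and the ACGAN's generator and discriminator. To make the two-headed discriminator concrete (an adversarial real/fake output plus an auxiliary digit classifier), here is a minimal PyTorch sketch sized for 16 × 16 heatmaps; the layer widths are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ACGANDiscriminator(nn.Module):
    """Discriminator with an auxiliary classification head (ACGAN-style)."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1),   # 16x16 -> 8x8
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 8x8 -> 4x4
            nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.adv = nn.Linear(64 * 4 * 4, 1)          # real/fake head
        self.aux = nn.Linear(64 * 4 * 4, n_classes)  # digit-class head

    def forward(self, x: torch.Tensor):
        h = self.features(x)
        return torch.sigmoid(self.adv(h)), self.aux(h)  # validity, class logits

# Usage on a dummy batch of four 16 x 16 "heatmaps"
d = ACGANDiscriminator()
validity, digit_logits = d(torch.randn(4, 1, 16, 16))
```

Training would then combine a binary cross-entropy loss on `validity` with a cross-entropy loss on `digit_logits`, consistent with the loss/auxiliary-loss split shown in Figure 13.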
18 pages, 332 KiB  
Article
Unveiling Superstition in Vieste: Popular Culture and Ecclesiastical Tribunals in the 18th-Century Kingdom of Naples
by Francesca Vera Romano
Religions 2024, 15(10), 1202; https://doi.org/10.3390/rel15101202 - 2 Oct 2024
Cited by 1 | Viewed by 723 | Correction
Abstract
This study analyses two trials involving magic, superstition, exorcism, and witchcraft, held in 1713 in the Diocese of Vieste (present-day Apulia), Kingdom of Naples. It illuminates the dynamics between the Church, magical practices, and the territorial context, offering insight into a less-explored period of inquisition history, when the Catholic Church’s fight against superstition was beginning to wane. The first trial, against Rita di Ruggiero, is very rich in detail and gives a clear picture of the magical practices in use during the early modern period. It also touches, albeit only marginally, on a theme that would prove crucial to the persistence of these practices in the Kingdom of Naples: the complex interactions between state and ecclesiastical authorities. The second 1713 trial, involving Elisabetta Del Vecchio, explores accusations of bewitchment, contributing to our understanding of witchcraft paradigms. Full article
30 pages, 2512 KiB  
Article
Societal Perceptions and Acceptance of Virtual Humans: Trust and Ethics across Different Contexts
by Michael Gerlich
Soc. Sci. 2024, 13(10), 516; https://doi.org/10.3390/socsci13100516 - 29 Sep 2024
Viewed by 2122
Abstract
This article examines public perceptions of virtual humans across various contexts, including social media, business environments, and personal interactions. Using an experimental approach with 371 participants in the United Kingdom, this research explores how the disclosure of virtual human technology influences trust, performance perception, usage likelihood, and overall acceptance. Participants interacted with virtual humans in simulations, initially unaware of their virtual nature, and then completed surveys to capture their perceptions before and after disclosure. The results indicate that trust and acceptance are higher in social media contexts, whereas business and general settings reveal significant negative shifts post-disclosure. Trust emerged as a critical factor influencing overall acceptance, with social media interactions maintaining higher levels of trust and performance perceptions than business environments and general interactions. A qualitative analysis of open-ended responses and follow-up interviews highlights concerns about transparency, security, and the lack of human touch. Participants expressed fears about data exploitation and the ethical implications of virtual human technology, particularly in business and personal settings. This study underscores the importance of ethical guidelines and transparent protocols to enhance the adoption of virtual humans in diverse sectors. These findings offer valuable insights for developers, marketers, and policymakers to optimise virtual human integration while addressing societal apprehensions, ultimately contributing to more effective and ethical deployment of virtual human technologies. Full article
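The pre-/post-disclosure comparison behind the study's Q-Q plots can be sketched as follows; the data are synthetic, and the paired t-test with a Shapiro–Wilk normality check is one plausible reading of the reported workflow rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 371  # sample size reported in the abstract

# Hypothetical 7-point trust ratings before and after disclosure (paired)
trust_pre = rng.normal(5.2, 0.8, size=n)
trust_post = trust_pre - rng.normal(0.6, 0.5, size=n)  # assumed post-disclosure drop

diff = trust_pre - trust_post
w, p_norm = stats.shapiro(diff)                 # normality of paired differences (cf. Q-Q plots)
t, p = stats.ttest_rel(trust_pre, trust_post)   # paired pre/post comparison
print(f"Shapiro-Wilk p={p_norm:.3f}, paired t={t:.2f}, p={p:.4g}")
```

If the differences depart from normality, a Wilcoxon signed-rank test (`stats.wilcoxon(diff)`) would be the usual non-parametric fallback.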
Show Figures

Figure 1. VH using VASA-1 (Xu et al. 2024).
Figure 2. Q-Q plot of trust before and after disclosure.
Figure 3. Q-Q plot of usage acceptance before and after disclosure.
Figure 4. Q-Q plot of trust.
Figure 5. Performance Q-Q plot (EX3).
Figure 6. Usage likelihood.
Figure 7. Q-Q plot of overall acceptance.