Multimodal Technol. Interact., Volume 7, Issue 7 (July 2023) – 12 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
20 pages, 636 KiB  
Article
The Impact of Mobile Learning on Students’ Attitudes towards Learning in an Educational Technology Course
by Reham Salhab and Wajeeh Daher
Multimodal Technol. Interact. 2023, 7(7), 74; https://doi.org/10.3390/mti7070074 - 20 Jul 2023
Cited by 7 | Viewed by 9484
Abstract
As technology has rapidly and globally transformed teaching and learning at educational institutions, innovative technological developments, along with their tools and applications, have recently entered the education system. Mobile learning (m-learning) employs wireless technologies for thinking, communicating, learning, and sharing in order to disseminate and exchange knowledge. Assessing students' attitudes toward mobile learning is therefore crucial, as learning attitudes affect their motivation, performance, and beliefs about mobile learning. However, mobile learning remains under-researched, especially in the context of the Middle East. This study contributes to our knowledge of students' attitudes towards mobile-based learning: its goal was to investigate m-learning's effect on the learning attitudes of technology education students. An explanatory sequential mixed-methods approach was used to examine the attitudes of 50 students who took an educational technology class; a quasi-experiment was conducted and a phenomenological approach was adopted. Data were gathered from an experimental group and a control group. Focus group discussions with three groups and 25 semi-structured interviews were held with students who experienced m-learning in their course. An ANCOVA revealed the impact of m-learning on attitudes and their components, and inductive and deductive content analysis was conducted. Eleven subthemes stemmed from three main themes, including personalized learning, visualization of learning motivation, less learning frustration, enhanced participation, learning on familiar devices, and social interaction. The researchers recommend that higher education institutions adhere to a set of guiding principles when creating m-learning policies and customize the m-learning environment with higher levels of interactivity to meet students' needs and learning styles, thereby improving their attitudes towards m-learning. (A schematic sketch of the ANCOVA step appears after the figure caption below.)
Figure 1. Explanatory sequential design phases as cited by [43].
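The ANCOVA mentioned in the abstract compares post-intervention attitude scores between groups while adjusting for pre-intervention scores. Below is a minimal sketch of that kind of analysis with statsmodels; the column names and all values are invented placeholders, not the study's data.

```python
# Hedged ANCOVA sketch: post-test attitude scores modelled from group
# membership, with pre-test scores as covariate. Synthetic placeholder data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["mlearning"] * 5 + ["control"] * 5,
    "pre":   [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.0, 2.7, 3.3, 2.8],
    "post":  [4.2, 3.9, 4.5, 4.1, 4.0, 3.3, 3.1, 2.9, 3.4, 3.0],
})

# Post-test modelled from group membership while adjusting for pre-test.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type-II ANCOVA table
```

The C(group) term carries the treatment effect of interest, while the pre covariate absorbs baseline differences between the groups.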
32 pages, 1273 KiB  
Review
Encoding Variables, Evaluation Criteria, and Evaluation Methods for Data Physicalisations: A Review
by Champika Ranasinghe and Auriol Degbelo
Multimodal Technol. Interact. 2023, 7(7), 73; https://doi.org/10.3390/mti7070073 - 18 Jul 2023
Cited by 8 | Viewed by 3037
Abstract
Data physicalisations, or physical visualisations, represent data physically, using variable properties of physical media. As an emerging area, data physicalisation research needs conceptual foundations to support thinking about, designing, and evaluating new physical representations of data. Yet it remains unclear (i) what encoding variables are at the designer's disposal during the creation of physicalisations, (ii) what evaluation criteria could be useful, and (iii) what methods can be used to evaluate physicalisations. This article addresses these three questions through a narrative review and a systematic review. The narrative review draws on the literature from Information Visualisation, HCI, and Cartography to provide a holistic view of encoding variables for data. The systematic review looks closely at the evaluation criteria and methods that can be used to evaluate data physicalisations. Together, the reviews offer a conceptual framework for researchers and designers interested in designing and evaluating data physicalisations; the framework can be used as a common vocabulary to describe physicalisations and to identify design opportunities. We also propose a seven-stage model for designing and evaluating physical data representations, which can guide the design of physicalisations and ideation along the identified stages. The evaluation criteria and methods extracted during this work can inform the assessment of existing and future data physicalisation artefacts.
Figure 1. Dynamic variables illustrated: numbers (e.g., 1, 3, and 5) stand for examples of representational states, and the space between them stands for a time interval. (a) Two examples of perception times; (b) two examples of temporal orders (chronological, reverse chronological); (c) two examples of duration; (d) two examples of temporal frequency; (e) two examples of rates of change; (f) two examples of synchronizations (lags t1 and t2) between two time series.
Figure 2. Examples of encoding variables from papers of the systematic review. Physical: different types of material represent the users' core academic interests (yellow stands for 'folding paper') and their additional research interests (orange stands for 'acrylic'); for the original figure, see [66]. Visual: the average effort of users during a running segment is encoded as the length of a pin on the board [67]. Haptic: indoor air quality data are encoded as vibration in the haptic probe from [68]. Sonic: the muscle tension of flutists is used to create live water sounds as they play their flutes [69]. Olfactory: the fan's speed is used to control the airflow rate [70]. Dynamic: the LED ring encircling the device fades in/out slowly or quickly to convey whether the overall emotional experience of a participant is positive or negative [71].
Figure 3. Paper screening procedure.
Figure 4. Evaluation criteria used to evaluate physicalisations with casual and utilitarian intents.
Figure 5. A model connecting the dimensions investigated during the systematic review. Blue arrows indicate a statistically significant association between two dimensions. The interaction dimension is coloured grey because it was not studied in the systematic review. The process is iterative, but arrows describing iterations are omitted to ease readability.
16 pages, 8295 KiB  
Article
Experiencing Authenticity of the House Museums in Hybrid Environments
by Alessandra Miano and Marco Borsotti
Multimodal Technol. Interact. 2023, 7(7), 72; https://doi.org/10.3390/mti7070072 - 18 Jul 2023
Viewed by 1730
Abstract
The paper presents an existing scenario related to the advanced integration of digital technologies in the field of house museums, based on the critical literature and applied experimentation. House museums are a particular type of heritage site in which the tension between the evocative capacity of the spaces and the requirements of preservation is especially pronounced. In this dimension, a seamless approach amplifies the atmospheric component of the space, superimposing, through hybrid digital technologies, an interactive, context-driven layer in an open dialogue between the digital and the physical. The methodology draws on the one hand from a literature review, framing the macro themes of the research, and on the other from an overview of case studies selected on the basis of the experiential value of the space. The analysis of the selected cases used the following criteria: the formal dimension of the technology; the narrative plot, as storytelling of a socio-cultural atmosphere or identification within an intimate story; and the involvement of visitors, as individual immersion or collective rituality. The paper aims to outline a developmental panorama in which the integration of hybrid technologies points to a new seamless awareness within application scenarios, as a continuous, work-in-progress challenge.
(This article belongs to the Special Issue Critical Reflections on Digital Humanities and Cultural Heritage)
Figure 1. Installation "du feasts and tambù" (photo: Rick Mando).
Figure 2. (a) Talking painting; (b) the family dinner (photo: Alessandra Miano).
Figure 3. (a) General view of the entrance; (b) the living room (photo: Alessandra Miano).
14 pages, 1308 KiB  
Article
Would You Hold My Hand? Exploring External Observers’ Perception of Artificial Hands
by Svenja Y. Schött, Patricia Capsi-Morales, Steeven Villa, Andreas Butz and Cristina Piazza
Multimodal Technol. Interact. 2023, 7(7), 71; https://doi.org/10.3390/mti7070071 - 17 Jul 2023
Cited by 1 | Viewed by 1693
Abstract
Recent technological advances have enabled the development of sophisticated prosthetic hands, which can help their users compensate for lost motor functions. While research and development have mostly addressed the functional requirements and needs of prosthesis users, broader societal perception (e.g., by external observers not affected by limb loss themselves) has not yet been thoroughly explored. To fill this gap, we investigated how the physical design of artificial hands influences their perception by external observers. First, we conducted an online study (n = 42) to explore observers' emotional responses toward three different types of artificial hands. Then, we conducted a lab study (n = 14) to examine the influence of design factors and depth of interaction on perceived trust and usability. Our findings indicate that some design factors directly impact the trust individuals place in the system's capabilities, and that engaging in deeper physical interactions leads to a more profound understanding of the underlying technology. Thus, our study shows the crucial role of design features and interaction in shaping the emotions around, trust in, and perceived usability of artificial hands. These factors ultimately impact the overall perception of prosthetic systems and, hence, the acceptance of these technologies in society.
(This article belongs to the Special Issue Challenges in Human-Centered Robotics)
Figure 1. Overview of the protocol and assessments included in this study. The same procedure is repeated for the three devices in a randomized order. SAM = Self-Assessment Manikin; UEQ = User Experience Questionnaire; MDMT = Multi-Dimensional Conception and Measure of Human–Robot Trust.
Figure 2. The different types of artificial hands used in the study: the qb SoftHand (QBRobotics), the iLimb-Ultra (Ossur), and the VariPlus Speed (Ottobock).
Figure 3. Results of the SAM scale for the online survey. The emotional response to static images of the three artificial hand designs, iLimb-Ultra (red), qb SoftHand (blue), and VariPlus Speed (green), is evaluated on a 9-point Likert scale. The asterisk indicates a significant difference between pairs.
Figure 4. Results of the six dimensions of user experience measured with the UEQ. Raw data have been transformed to a range from −3 (negative user experience) to +3 (positive user experience). Each figure depicts the positive and negative attitudes toward the three artificial hands: iLimb-Ultra in red, qb SoftHand in blue, and VariPlus Speed in green. The first interaction is shown on the left and the second on the right. The asterisk indicates a significant difference between pairs.
Figure 5. Results of the four dimensions of trust measured with the MDMT on a 7-point Likert scale for the three artificial hands: iLimb-Ultra (red), qb SoftHand (blue), and VariPlus Speed (green). Each figure displays the first interaction with an artificial hand on the left (shaded) and the second on the right (plain). The asterisk indicates a significant difference between pairs.
24 pages, 14309 KiB  
Article
Exploring the Educational Value and Impact of Vision-Impairment Simulations on Sympathy and Empathy with XREye
by Katharina Krösl, Marina Lima Medeiros, Marlene Huber, Steven Feiner and Carmine Elvezio
Multimodal Technol. Interact. 2023, 7(7), 70; https://doi.org/10.3390/mti7070070 - 6 Jul 2023
Cited by 1 | Viewed by 2349
Abstract
To create a truly accessible and inclusive society, we need to take the more than 2.2 billion people with vision impairments worldwide into account when we design our cities, buildings, and everyday objects. This requires sympathy and empathy, as well as a certain level of understanding of the impact of vision impairments on perception. In this study, we explore the potential of an extended version of our vision-impairment simulation system XREye to increase sympathy and empathy, and we evaluate its educational value in an expert study with 56 educators and education students. We include data from a previous study in related work on sympathy and empathy as a baseline for comparison with our data. Our results show increased sympathy and empathy after experiencing XREye, as well as positive feedback regarding its educational value. Hence, we believe that vision-impairment simulations such as XREye merit use for educational purposes, to increase awareness of the challenges people with vision impairments face in their everyday lives.
Figure 1. Side-by-side view of the unmodified VR view (left eye) and simulated myopia (right eye).
Figure 2. Side-by-side view of the unmodified 360° image view (left eye) and simulated cornea disease (right eye).
Figure 3. Side-by-side view of the unmodified AR view (left eye) and simulated wet AMD (right eye).
Figure 4. Side-by-side view of the unmodified 360° image view (left eye) and simulated complete achromatopsia (right eye).
Figure 5. Setting of the user study.
Figure 6. Comparison of responses of Guarese et al. [42] pre-test participants, spectators, and users to adapted sympathy statement ARS-2 (Item 8 in the expert study questionnaire, see Table 1): "[…] I understood what is bothering blind and visually impaired people in their day-to-day tasks." Results are aligned by positive ratings and include the number of respondents (N), median (M), and standard deviation (ST).
Figure 7. Comparison of responses of Guarese et al. [42] pre-test participants, spectators, and users to adapted sympathy statement ARS-5 (Item 9, see Table 1): "I was able to recognize the problems that blind and visually impaired people have […]." Results are aligned by positive ratings and include N, M, and ST.
Figure 8. Comparison of responses of Guarese et al. [42] pre-test participants, spectators, and users to adapted empathy statement ARE-3 (Item 10, see Table 1): "[…] I felt as though I had a visual impairment." Results are aligned by positive ratings and include N, M, and ST.
Figure 9. Boxplots of distributions of Likert-scale ratings from users, spectators, and Guarese et al. [42] pre-test participants for questionnaire items 8–10 (see Table 1 for full statements).
Figure 10. Comparison of responses from all expert-study participants, spectators, and users to Item 11 (pedagogical value, see Table 1), aligned by positive ratings, including N, M, and ST.
Figure 11. Comparison of responses from all expert-study participants, spectators, and users to Item 12 (classroom use, see Table 1), aligned by positive ratings, including N, M, and ST.
Figure 12. Comparison of responses from all expert-study participants, spectators, and users to Item 13 (remote use, see Table 1), aligned by positive ratings, including N, M, and ST.
Figure 13. Comparison of responses from all expert-study participants, spectators, and users to Item 15 (adapting teaching methods, see Table 1), aligned by positive ratings, including N, M, and ST.
Figure 14. Boxplots of distributions of Likert-scale ratings from users and spectators for questionnaire items 11–13 (11: pedagogical value, 12: classroom use, 13: remote use) and 15 (adapting teaching methods).
11 pages, 2573 KiB  
Article
Exploring Learning Curves in Acupuncture Education Using Vision-Based Needle Tracking
by Duy Duc Pham, Trong Hieu Luu, Le Trung Chanh Tran, Hoai Trang Nguyen Thi and Hoang-Long Cao
Multimodal Technol. Interact. 2023, 7(7), 69; https://doi.org/10.3390/mti7070069 - 6 Jul 2023
Cited by 1 | Viewed by 2132
Abstract
Measuring learning curves allows for inspection of the rate of learning and the competency threshold for each individual, training lesson, or training method. In this work, we investigated learning curves in acupuncture needle manipulation training, with continuous performance measurement using a vision-based needle training system. We tracked the needle insertion depth of 10 students to investigate their learning curves. The results show that the group-level learning curve was well fitted by the Thurstone curve, indicating that students were able to improve their needle insertion skills with repeated practice. Additionally, the analysis of individual learning curves revealed valuable insights into the learning experiences of each participant, highlighting the importance of considering individual differences in learning styles and abilities when designing training programs. (A schematic LOESS sketch appears after the figure captions below.)
Figure 1. Generic learning curve in health profession education. The curve shape, the position of the inflection point, and the performance levels vary among measured performances and individuals.
Figure 2. The vision-based needle tracking system. (A) The hardware setup. (B) Acupuncture needle manipulation parameters. In this work, we focused on the insertion depth.
Figure 3. Calculation of the needle manipulation parameters, with a focus on insertion depth, based on the bounding rectangle position during the insertion process. (A) The needle is detected. (B) Start frame before insertion. (C) End frame after insertion.
Figure 4. Training of controlling the depth of acupuncture needle insertion. (A) A student performs needle insertion using the system; the graphical user interface displays needle manipulation parameters (picture used with participant permission). (B) Four video frames demonstrate the needle insertion captured by the camera.
Figure 5. Learning curves of insertion depth error for the group and individual participants using the LOESS smoothing method, with 95% confidence intervals. (A) Group-level learning curve. (B–K) Individual learning curves with different learning patterns; the learning curves of P2, P7, and P8 are atypical. (L) Group-level learning curve for individuals with typical learning curves. The dashed blue line indicates the competency level at 0.2 cm.
Figure 6. The increasing–decreasing return learning curve that goes beyond the Thurstone learning curve. Figure adapted from [34].
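Figure 5 plots LOESS-smoothed learning curves of insertion depth error against a 0.2 cm competency level. Here is a minimal sketch of that smoothing step using statsmodels' lowess on synthetic trial data; it is not the authors' code, and the decay shape and noise level are assumptions.

```python
# Hedged sketch: LOESS smoothing of insertion-depth error over trials.
# Trial data are synthetic; the 0.2 cm threshold comes from the abstract.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
trials = np.arange(1, 101)
# Error decays with practice plus noise -- a typical learning-curve shape.
error_cm = 0.8 * np.exp(-trials / 30) + rng.normal(0, 0.05, trials.size)

smoothed = lowess(error_cm, trials, frac=0.3)  # rows: (trial, fitted error)
competent_from = next((int(t) for t, e in smoothed if e < 0.2), None)
print(f"competency level (0.2 cm) first reached around trial {competent_from}")
```

The frac parameter controls the smoothing window: smaller values follow individual fluctuations more closely, which matters when individual curves are atypical, as for P2, P7, and P8.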
16 pages, 435 KiB  
Article
Using Open-Source Automatic Speech Recognition Tools for the Annotation of Dutch Infant-Directed Speech
by Anika van der Klis, Frans Adriaans, Mengru Han and René Kager
Multimodal Technol. Interact. 2023, 7(7), 68; https://doi.org/10.3390/mti7070068 - 3 Jul 2023
Cited by 2 | Viewed by 2241
Abstract
There is considerable interest in the annotation of speech addressed to infants. Infant-directed speech (IDS) has acoustic properties that might pose a challenge to automatic speech recognition (ASR) tools developed for adult-directed speech (ADS). While ASR tools could potentially speed up the annotation process, their effectiveness on this speech register is currently unknown. In this study, we assessed to what extent open-source ASR tools can successfully transcribe IDS. We used speech data from 21 Dutch mothers reading picture books containing target words to their 18- and 24-month-old children (IDS) and to the experimenter (ADS). In Experiment 1, we examined how the ASR tool Kaldi-NL performs at annotating target words in IDS vs. ADS: Kaldi-NL found only 55.8% of target words in IDS, while it annotated 66.8% correctly in ADS. In Experiment 2, we assessed the difficulties in annotating IDS more broadly by transcribing all IDS utterances manually and comparing the word error rates (WERs) of two ASR systems, Kaldi-NL and WhisperX, finding that WhisperX performs significantly better than Kaldi-NL. While there is much room for improvement, the results show that automatic transcriptions provide a promising starting point for researchers who have to transcribe large amounts of speech directed at infants. (A minimal WER sketch appears after the figure captions below.)
(This article belongs to the Special Issue Child–Computer Interaction and Multimodal Child Behavior Analysis)
Figure 1. The proportions of hits and misses for each speech register within each age group.
Figure 2. Boxplots of the mean pitch of target words.
Figure 3. Boxplots of the pitch range of target words.
Figure 4. Boxplots of the articulation rate (syllables/s) of target words.
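Experiment 2 compares the two ASR systems by word error rate. As a reference for how WER is conventionally computed (word-level edit distance divided by the reference length), here is a self-contained sketch; the Dutch example is a toy, not study data.

```python
# Hedged sketch: word error rate (WER) via word-level edit distance.
# WER = (substitutions + deletions + insertions) / number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Toy example (not study data): one substitution over four reference words.
print(wer("de kat zit daar", "de hond zit daar"))  # 0.25
```

Note that WER can exceed 1.0 when the hypothesis inserts many extra words, which is one reason manual reference transcriptions remain the anchor for evaluation.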
17 pages, 1104 KiB  
Article
Federated Learning for Clinical Event Classification Using Vital Signs Data
by Ruzaliev Rakhmiddin and KangYoon Lee
Multimodal Technol. Interact. 2023, 7(7), 67; https://doi.org/10.3390/mti7070067 - 29 Jun 2023
Cited by 3 | Viewed by 3102
Abstract
Accurate and timely diagnosis is a pillar of effective healthcare. However, the challenge lies in gathering extensive training data while maintaining patient privacy. This study introduces a novel approach using federated learning (FL) and a cross-device multimodal model for clinical event classification based on vital signs data. Our architecture employs FL to train several machine learning models, including random forest, AdaBoost, and SGD ensemble models, on vital signs data sourced from a diverse clientele at a Boston hospital (the MIMIC-IV dataset). The FL structure trains directly on each client's device, ensuring that no sensitive data are transferred and preserving patient privacy. The study demonstrates that FL offers a powerful tool for privacy-preserving clinical event classification, with our approach achieving an accuracy of 98.9%. These findings highlight the significant potential of FL and cross-device ensemble technology in healthcare applications, especially in the context of handling large volumes of sensitive patient data. (A schematic federated-averaging sketch appears after the figure captions below.)
Figure 1. The general concept of federated learning in the healthcare system.
Figure 2. Configuration diagram of the FL operation that manages the FL lifecycle.
Figure 3. Optimal performance achieved with ten rounds and five clients for various machine learning models.
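The core FL idea in the abstract is that training stays on each client's device and only model parameters travel to the server. Below is a FedAvg-style sketch for linear SGD models with scikit-learn; the features, labels, and client setup are synthetic placeholders, and this is not the paper's pipeline or the MIMIC-IV data.

```python
# Hedged sketch: one round of FedAvg-style aggregation over five simulated
# clients. Raw data never leave a client; only coefficients are averaged.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
clients = []
for _ in range(5):  # five simulated client devices
    X = rng.normal(size=(200, 4))             # placeholder vital-sign features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder event labels
    clients.append((X, y))

local_models = []
for X, y in clients:
    m = SGDClassifier(loss="log_loss", max_iter=20)
    m.fit(X, y)                               # training stays on-device
    local_models.append(m)

# Server aggregates parameters only (one unweighted FedAvg round).
global_coef = np.mean([m.coef_ for m in local_models], axis=0)
global_intercept = np.mean([m.intercept_ for m in local_models], axis=0)
print(global_coef, global_intercept)
```

For tree-based learners such as random forest and AdaBoost, parameters cannot be averaged this way; pooling or voting over client-trained ensembles is a common alternative, which is presumably closer to what the cross-device ensemble in the abstract refers to.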
23 pages, 1662 KiB  
Review
A Dynamic Interactive Approach to Music Listening: The Role of Entrainment, Attunement and Resonance
by Mark Reybrouck
Multimodal Technol. Interact. 2023, 7(7), 66; https://doi.org/10.3390/mti7070066 - 28 Jun 2023
Cited by 6 | Viewed by 2398
Abstract
This paper takes a dynamic interactive stance on music listening. It revolves around the focal concept of entrainment as an operational tool for describing the fine-grained dynamics between the music as an entraining stimulus and the listener as an entrained subject. Listeners, in this view, can be "entrained" by the sounds at several levels of processing, depending on the degree of attunement and the alignment of their attention. The concept of entrainment, however, is somewhat ill-defined, with distinct conceptual labels such as external vs. mutual, symmetrical vs. asymmetrical, metrical vs. non-metrical, within-person vs. between-person, and physical vs. cognitive entrainment. The boundaries between entrainment, resonance, and synchronization are also not always clear. There is, as such, a need for a broadened approach to entrainment, taking as a starting point the concept of oscillators that interact with each other in a continuous and ongoing way, and relying on the theoretical framework of interaction dynamics and the concept of adaptation. Entrainment, in this broadened view, is seen as an adaptive process that accommodates to the music under the influence of both the attentional direction of the listener and the configuration of the sounding stimuli. (A toy coupled-oscillator sketch appears after the figure captions below.)
Figure 1. Waveform (upper panel) and spectrogram (lower panel) of the first 31 s of Busoni's transcription of the Intermezzo of Bach's Toccata, Adagio & Fugue in C, BWV 564.
Figure 2. Waveform and spectrogram representation of the first four bars of the aria of Bach's Goldberg Variations (upper panels) and score notation of the first seven bars (lower panels), performed by Glenn Gould.
Figure 3. Example of phase and period relationship as exemplified in a sinusoidal depiction of a periodic stimulus.
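The abstract frames entrainment as interacting oscillators that adapt to one another. A generic two-oscillator Kuramoto-style toy, not a model proposed in the article, illustrates how a "listener" oscillator with a different natural frequency can lock to a stimulus; all parameter values are assumptions chosen for the demonstration.

```python
# Hedged illustration: one-way Kuramoto-style coupling in which a listener
# oscillator adapts its phase toward a musical stimulus oscillator.
import numpy as np

dt, steps = 0.01, 5000
w_music, w_listener = 2.0, 2.4   # natural frequencies (rad/s), assumed
coupling = 0.8                   # entrainment strength, assumed
theta_m, theta_l = 0.0, 1.5      # initial phases

for _ in range(steps):
    # One-way coupling: only the listener adapts toward the stimulus.
    theta_m += w_music * dt
    theta_l += (w_listener + coupling * np.sin(theta_m - theta_l)) * dt

phase_diff = (theta_m - theta_l) % (2 * np.pi)
print(f"steady phase difference after entrainment: {phase_diff:.2f} rad")
```

Phase locking occurs only when the frequency mismatch is smaller than the coupling strength; otherwise the phases drift apart, which loosely parallels the attunement and alignment conditions the abstract places on entrainment.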
20 pages, 5531 KiB  
Article
Towards Universal Industrial Augmented Reality: Implementing a Modular IAR System to Support Assembly Processes
by Detlef Gerhard, Matthias Neges, Jan Luca Siewert and Mario Wolf
Multimodal Technol. Interact. 2023, 7(7), 65; https://doi.org/10.3390/mti7070065 - 27 Jun 2023
Viewed by 2030
Abstract
While Industrial Augmented Reality (IAR) has many applications across the whole product lifecycle, most IAR applications today are custom-built for specific use cases. This contribution builds upon a scoping literature review of IAR data representations to present a modern, modular IAR architecture. The individual modules of the presented approach are responsible either for the user interface and user interaction or for data processing; they are use-case neutral and independent of each other, communicating through a strictly separated application layer. To demonstrate the architecture, this contribution presents an assembly process that is supported once with a pick-to-light system and once using in situ projections. Both are implemented on top of the novel architecture, allowing most of the work on the individual modules to be reused. This IAR architecture, based on clearly separated modules with defined interfaces, allows small companies with limited personnel resources in particular to adapt IAR to their specific use cases more easily than by developing single-use applications from scratch. (A schematic module-interface sketch appears after the figure captions below.)
Figure 1. Process of the systematic scoping review.
Figure 2. Simplified data model of anchors and representations.
Figure 3. Common anchor types for IAR applications.
Figure 4. System overview with the view layer, application layer, data layer, and the modules connecting them.
Figure 5. Steps required for preparing a system.
Figure 6. Sequence diagram of the application-planning module.
Figure 7. Sequence diagram for the execution of the application.
Figure 8. The implemented Industrial Augmented Reality (IAR) prototypes based on the modular architecture.
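The architecture's key claim is that UI/interaction modules and data-processing modules stay independent and communicate only through an application layer. A minimal sketch of that separation follows; every class and method name here is invented for illustration and is not the paper's actual interface.

```python
# Hedged sketch: two interchangeable output modules behind one application
# layer. Names are illustrative only, not the paper's interfaces.
from typing import Protocol

class OutputModule(Protocol):
    def render(self, instruction: str) -> None: ...

class PickToLight:
    def render(self, instruction: str) -> None:
        print(f"[pick-to-light] illuminate bin for: {instruction}")

class InSituProjection:
    def render(self, instruction: str) -> None:
        print(f"[projector] project overlay for: {instruction}")

class ApplicationLayer:
    """Mediates between data processing and UI modules, so swapping the
    output module changes the IAR modality without touching the rest."""
    def __init__(self, output: OutputModule) -> None:
        self.output = output

    def run_assembly_step(self, step: str) -> None:
        self.output.render(step)  # the only cross-module call path

# The same assembly step drives both prototypes described in the abstract.
for ui in (PickToLight(), InSituProjection()):
    ApplicationLayer(ui).run_assembly_step("mount bracket A")
```

Because modules only ever see the application layer's interface, reusing the assembly-step logic across the pick-to-light and projection prototypes requires no changes to the data-processing side.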
24 pages, 3267 KiB  
Review
From Earlier Exploration to Advanced Applications: Bibliometric and Systematic Review of Augmented Reality in the Tourism Industry (2002–2022)
by Mohamed Zaifri, Hamza Khalloufi, Fatima Zahra Kaghat, Ahmed Azough and Khalid Alaoui Zidani
Multimodal Technol. Interact. 2023, 7(7), 64; https://doi.org/10.3390/mti7070064 - 26 Jun 2023
Cited by 3 | Viewed by 3955
Abstract
Augmented reality (AR) has emerged as a transformative technology with the potential to revolutionize the tourism industry. Nonetheless, there is a scarcity of studies tracing the progression of AR and its application in tourism from early exploration to recent advancements. This study aims to provide a comprehensive overview of the evolution, contexts, and design elements of AR in tourism over the period 2002–2022, offering insights for further progress in this domain. Employing a dual-method approach, a bibliometric analysis was conducted on 861 articles collected from the Scopus and Web of Science databases to investigate the evolution of AR research over time and across countries, and to identify the main contexts in which AR is used in tourism. In the second part of the study, a systematic content analysis was conducted on a subset of 57 selected studies that specifically employed AR systems in various tourism situations. Through this analysis, the most commonly utilized AR design components, such as tracking systems, AR devices, tourism settings, and virtual content, were summarized, and we explored how these components were integrated to enhance the overall tourism experience. The findings reveal a growing trend in research production, led by Europe and Asia. Key contexts of AR applications in tourism encompass cultural heritage, mobile AR, and smart tourism, with emerging topics such as artificial intelligence (AI), big data, and COVID-19. Frequently used AR design components comprise mobile devices, marker-less tracking systems, outdoor environments, and visual overlays. Future research could involve optimizing AR experiences for users with disabilities, supporting multicultural experiences, integrating AI with big data, fostering sustainability, and remote virtual tourism. This study contributes to the ongoing discourse on the role of AR in shaping the future of tourism in the post-COVID-19 era by providing valuable insights for researchers, practitioners, and policymakers in the tourism industry. (A schematic keyword co-occurrence sketch appears after the figure captions below.)
Figure 1. Process of selecting final papers, guided by the PRISMA 2020 framework.
Figure 2. Annual scientific production.
Figure 3. Geographical distribution of scientific production.
Figure 4. Top 10 most productive countries.
Figure 5. Top 10 most cited countries.
Figure 6. Keyword co-occurrence network based on authors' keywords.
Figure 7. The occurrence of keywords and their frequency over time (years), based on the authors' keywords.
Figure 8. AR design elements. (a) AR tracking systems. (b) Virtual content overlaid by AR devices. (c) AR devices used to deploy AR experiences. (d) Tourism settings supported by AR technology.
Figure 9. Overview of the methods and the main findings of the study.
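One building block of the bibliometric method is the keyword co-occurrence network in Figure 6: nodes are author keywords, and edge weights count how often two keywords appear on the same paper. A minimal sketch with networkx; the keyword lists are invented placeholders, not the study's corpus.

```python
# Hedged sketch: build a weighted keyword co-occurrence graph.
# Each paper's keyword set contributes +1 to every keyword pair it contains.
from itertools import combinations
import networkx as nx

papers = [  # placeholder author-keyword lists, not the 861-article corpus
    ["augmented reality", "tourism", "cultural heritage"],
    ["augmented reality", "mobile AR", "tourism"],
    ["smart tourism", "augmented reality", "artificial intelligence"],
]

G = nx.Graph()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Strongest co-occurrence pairs first.
for a, b, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{a} -- {b}: {d['weight']}")
```

A weighted graph of this kind is the usual input for the clustering and layout seen in co-occurrence visualizations such as Figure 6.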
19 pages, 1245 KiB  
Article
Mid-Air Gestural Interaction with a Large Fogscreen
by Vera Remizova, Antti Sand, I. Scott MacKenzie, Oleg Špakov, Katariina Nyyssönen, Ismo Rakkolainen, Anneli Kylliäinen, Veikko Surakka and Yulia Gizatdinova
Multimodal Technol. Interact. 2023, 7(7), 63; https://doi.org/10.3390/mti7070063 - 24 Jun 2023
Cited by 2 | Viewed by 2200
Abstract
Projected walk-through fogscreens have been built, but there is little research evaluating interaction performance with them. The present study investigated mid-air hand gestures for interaction with a large fogscreen. Participants (N = 20) selected objects on a fogscreen using tapping and dwell-based gestural techniques, with and without vibrotactile/haptic feedback. In terms of Fitts' law, the throughput was about 1.4 bps to 2.6 bps, suggesting that gestural interaction with a large fogscreen is a suitable and effective input method. Our results also suggest that tapping without haptic feedback has good performance and potential for interaction with a fogscreen, and that tactile feedback is not necessary for effective mid-air interaction. These findings have implications for the design of gestural interfaces for fogscreens. (A schematic throughput computation appears after the figure captions below.)
Figure 1. Participants interacting with a fogscreen.
Figure 2. (1) User standing in front of the fogscreen; (2) fogscreen device; (3) projector; and (4) gesture sensor.
Figure 3. (a) Haptic device; (b) the device in the experiment environment.
Figure 4. Visual feedback shown on the target: (a,b) tapping gesture; (c–e) dwell-based selection gesture.
Figure 5. Two-dimensional target selection task in ISO 9241-9 [30].
Figure 6. Expanded formula for throughput, featuring speed (1/MT) and accuracy (SDx).
Figure 7. Plots of mean values: (a) throughput, (b) movement time, and (c) target re-entries by selection method and feedback mode. Error bars represent ±1 standard error of the mean (SEM).
Figure 8. Linear regression between movement time (MT) and index of difficulty (ID) for tapping and dwell-based gestures (a) without haptic feedback and (b) with haptic feedback.
Figure 9. Participants' mean preference ratings for the selection methods. A lower score is better.
Figure 10. Subjective ratings of participants' preference for all interaction gestures, with mean markers (×) and outlier points.
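The throughput figures of 1.4–2.6 bps follow the ISO 9241-9 "effective" formulation named in Figure 6: accuracy enters through the spread of selection endpoints (SDx), giving We = 4.133 × SDx, IDe = log2(Ae/We + 1), and TP = IDe/MT. A minimal sketch of that computation on synthetic endpoint data (all values are assumed, not the study's measurements):

```python
# Hedged sketch: ISO 9241-9 effective throughput for one target condition.
import numpy as np

# Signed endpoint deviations along the task axis (m) and movement times (s)
# -- placeholder values, not the study's data.
x_dev = np.array([0.010, -0.020, 0.005, 0.025, -0.012, 0.002])
mt = np.array([0.92, 1.05, 0.88, 1.10, 0.97, 1.01])
Ae = 0.35  # effective movement amplitude (m), assumed

SDx = x_dev.std(ddof=1)       # accuracy: spread of selection endpoints
We = 4.133 * SDx              # effective target width
IDe = np.log2(Ae / We + 1)    # effective index of difficulty (bits)
TP = IDe / mt.mean()          # throughput (bits per second)
print(f"IDe = {IDe:.2f} bits, TP = {TP:.2f} bps")
```

With these placeholder numbers the sketch yields roughly 2.7 bps, the same order as the study's reported range; in practice TP is computed per participant and condition and then averaged, as in Figure 7a.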