
Search Results (339)

Search Parameters:
Keywords = avatar

24 pages, 2578 KiB  
Article
Dynamic Neural Network States During Social and Non-Social Cueing in Virtual Reality Working Memory Tasks: A Leading Eigenvector Dynamics Analysis Approach
by Pinar Ozel
Brain Sci. 2025, 15(1), 4; https://doi.org/10.3390/brainsci15010004 - 24 Dec 2024
Abstract
Background/Objectives: This research investigates brain connectivity patterns in reaction to social and non-social stimuli within a virtual reality environment, emphasizing their impact on cognitive functions, specifically working memory. Methods: Employing the LEiDA framework with EEG data from 47 participants, I examined dynamic brain network states elicited by social avatars compared to non-social stick cues during a VR memory task. By integrating LEiDA with deep learning and graph theory analyses, unique connectivity patterns associated with cue type were discerned, underscoring the substantial influence of social cues on cognitive processes. LEiDA, conventionally applied to fMRI, was employed here with EEG to detect swift alterations in brain network states, offering insights into cognitive processing dynamics. Results: The findings indicate distinct neural states for social and non-social cues; notably, social cues correlated with a unique brain state characterized by increased connectivity within self-referential and memory-processing networks, implying greater cognitive engagement. Moreover, deep learning attained approximately 99% accuracy in differentiating cue contexts, highlighting the efficacy of the leading eigenvectors from LEiDA in EEG analysis. Graph theory analysis also uncovered structural network disparities, signifying enhanced integration in contexts involving social cues. Conclusions: This multi-method approach elucidates the dynamic influence of social cues on brain connectivity and cognition, establishing a basis for VR-based cognitive rehabilitation and immersive learning, wherein social signals may significantly enhance cognitive function.
(This article belongs to the Special Issue The Application of EEG in Neurorehabilitation)
Figure 1. Applied methods (LEiDA, graph theory, and deep learning classification).
Figure 2. VR working memory task: selection and design schema.
Figure 3. Depiction of the trial process (checkered pattern inspired by [45]). Following the parameters of the conventional central cueing paradigm, the cue persisted on the screen for the duration of the trial (e.g., [46,47]). Panel (A) shows the social avatar cue, and Panel (B) shows the non-social stick cue. Timings, as depicted in the figure, were synchronized across cue types. The inter-trial interval was 1000 ms, during which a fixation cross was displayed. The experiment was a free-viewing study, allowing participants to move their eyes freely. Panel (C) shows the six possible left and right locations for the four encoding targets.
Figure 4. Extraction of EEG signal PL states. (A) For a given region, the EEG signal is first preprocessed. (B) The Hilbert transform is applied to acquire an analytic signal, whose phase can be represented over time at each TR (temporal resolution), i.e., the interval between consecutive data samples, used for monitoring dynamic connectivity alterations. (C) The dPL(t) matrix quantifies the degree of phase synchronization between each pair of areas. The dominant eigenvector of the dPL(t) matrix, denoted V(t), represents the primary direction of all phases; each element in V(t) corresponds to the projection of the phase of each region onto V(t). (D) The eigenvectors V(t) from all participants are combined and input into a k-means clustering algorithm, which separates the data points into a predetermined number of groups, k. (E) Every cluster centroid represents a recurring PL state. dPL refers to dynamic phase-locking. (Process summary: 1. preprocessing → 2. Hilbert transform → 3. dynamic phase-locking matrix (dPL) → 4. leading eigenvector calculation → 5. k-means clustering → 6. identification of recurrent phase-locking (PL) states.)
Figure 5. Repertoire of functional network states assessed with LEiDA and their association with working memory. For a clustering solution of k = 8, PL state #7 is significantly correlated with enhanced working memory scores (p = 0.0156; * denotes the significant p-value), highlighted in red in the row of probabilities. Error bars represent the standard error of the mean across all 47 participants. These results underscore the role of dynamic functional connectivity, clustered into 8 states, in understanding the neural underpinnings of working memory, because the clustered states and their connectivity represent dynamic functional connectivity during the working memory tasks. Heat maps of the connectivity matrix display phase-locking values (PLVs) between EEG channels under social and non-social cue conditions. Warmer hues signify elevated PLVs, denoting enhanced functional connectivity among brain regions. Examining the variations in connectivity patterns between the two conditions may elucidate areas of increased synchronization in reaction to social cues, corroborating the hypothesis of cue-specific brain network activation (nodes represent electrode locations).
Figure 6. PL state 7 differs significantly between the social and non-social working memory dynamic response. (Top) The PL state represented in cortical space, where functionally connected brain regions (spheres) are colored blue. (Middle) PL states represented as the outer product of Vc, a 64 × 64 matrix covering the electrode regions. (Bottom) Significant (p-FDR < 0.05) differences in the percentage of occurrence between the social and non-social working memory dynamic response. Dots represent individual data points; dark bars indicate the standard error of the mean. Analysis via non-parametric permutation-based t-test (N = 47 participants; * denotes the significant p-value).
Figure 7. Graphical representations of brain connectivity networks under social and non-social cue conditions. Each node signifies a brain region, while edges indicate substantial coherence-based connections between regions. Essential network metrics, such as clustering coefficient and degree distribution, are presented to highlight structural disparities in network organization across conditions. A more compact or clustered network architecture indicates improved integration within specific brain networks in reaction to social stimuli.
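The LEiDA pipeline summarized in the Figure 4 caption (preprocessing → Hilbert transform → phase-locking matrix → leading eigenvector → k-means) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the author's implementation; the channel count, signal model, and k = 3 are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.cluster import KMeans

def leading_eigenvectors(eeg, k):
    """eeg: (n_channels, n_samples) band-passed signal.
    Returns per-sample PL-state labels and the k cluster centroids."""
    phase = np.angle(hilbert(eeg, axis=1))  # instantaneous phase per channel
    n_ch, n_t = phase.shape
    V = np.empty((n_t, n_ch))
    for t in range(n_t):
        # dPL(t): cosine of pairwise phase differences (symmetric matrix)
        dpl = np.cos(phase[:, t, None] - phase[None, :, t])
        _, vecs = np.linalg.eigh(dpl)       # ascending eigenvalues
        v = vecs[:, -1]                     # leading eigenvector
        # fix the arbitrary sign so clustering is consistent
        V[t] = v if v[np.argmax(np.abs(v))] > 0 else -v
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(V)
    return km.labels_, km.cluster_centers_

# toy example: 4 channels of noisy 10 Hz oscillation with random phase offsets
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 500)
eeg = np.sin(2 * np.pi * 10 * t + rng.uniform(0, np.pi, (4, 1)))
eeg += 0.1 * rng.standard_normal((4, 500))
labels, centers = leading_eigenvectors(eeg, k=3)
```

Each centroid plays the role of one recurring PL state; state occurrence probabilities (as in Figure 5) follow from counting labels per condition.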
18 pages, 558 KiB  
Article
The Impact of Virtual Streamer Anthropomorphism on Consumer Purchase Intention: Cognitive Trust as a Mediator
by Chunyu Li and Fei Huang
Behav. Sci. 2024, 14(12), 1228; https://doi.org/10.3390/bs14121228 - 20 Dec 2024
Abstract
As an important tool for brand promotion and marketing, virtual streamers are gradually gaining status, especially in the Chinese market with its huge Internet user base. Virtual streamer anthropomorphism has gradually become an important research topic in the field of consumer behavior. However, the specific mechanism by which the multidimensional anthropomorphic characteristics of virtual streamers affect consumer trust and purchase intention requires further investigation. Therefore, based on avatar theory, this research explores how the anthropomorphic characteristics of virtual streamers affect consumer purchase intention through cognitive trust. A structural equation model was estimated using SPSS 27.0 and AMOS 24.0. Analysis of questionnaire data from 503 Chinese consumers found that behavioral, cognitive, and emotional anthropomorphism all exert a notable influence on cognitive trust. Appearance and emotional anthropomorphism directly affect purchase intention, and cognitive trust has a significant impact on purchase intention. Moreover, cognitive trust fully mediates the effects of behavioral and cognitive anthropomorphism on purchase intention and partially mediates the effect of emotional anthropomorphism on purchase intention. This study enriches the application of avatar theory to virtual streamers in live e-commerce and provides theoretical backing for virtual streamer development and enterprise marketing strategies. It also offers practical insights to help brands optimize virtual streamers and improve consumer participation and purchase conversion rates.
Figure 1. Research model.
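The mediation logic tested here (anthropomorphism → cognitive trust → purchase intention) can be illustrated with a bootstrap indirect-effect estimate. The data below are simulated with hypothetical path coefficients purely for illustration; the study itself used SPSS/AMOS structural equation modeling, not this procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 503  # sample size matching the study; the data are synthetic
anthro = rng.normal(size=n)                  # anthropomorphism score (X)
trust = 0.5 * anthro + rng.normal(size=n)    # cognitive trust (mediator M)
intent = 0.4 * trust + 0.1 * anthro + rng.normal(size=n)  # purchase intention (Y)

def coef(y, *xs):
    """OLS coefficients of y on an intercept plus the given regressors."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect(idx):
    a = coef(trust[idx], anthro[idx])[1]               # X -> M path
    b = coef(intent[idx], trust[idx], anthro[idx])[1]  # M -> Y path, controlling X
    return a * b

point = indirect(np.arange(n))  # full-sample indirect effect, ~0.2 by construction
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(1000)])
ci = np.percentile(boot, [2.5, 97.5])  # percentile bootstrap CI
```

A confidence interval excluding zero indicates a significant indirect effect; full versus partial mediation then depends on whether the direct X → Y path remains significant.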
22 pages, 3008 KiB  
Perspective
Digital Immortality in Palaeoanthropology and Archaeology: The Rise of the Postmortem Avatar
by Caroline M. Wilkinson, Mark A. Roughley and Sarah L. Shrimpton
Heritage 2024, 7(12), 7188-7209; https://doi.org/10.3390/heritage7120332 - 15 Dec 2024
Abstract
It has been proposed that we are entering the age of postmortalism, in which digital immortality is a credible option. The desire to overcome death has occupied humanity for centuries, and even though biological immortality is still impossible, recent technological advances have enabled a possible eternal life in the metaverse. In palaeoanthropology and archaeology contexts, we are often driven by a preoccupation with visualising and interacting with ancient populations, with facial depictions of people from the past enabling some interaction. New technologies such as Artificial Intelligence (AI) are profoundly transforming the ways that images, videos, voices, and avatars of digital ancient humans are produced, manipulated, disseminated, and viewed. For facial depiction practitioners, postmortalism crosses challenging ethical territory around consent and representation. Should we create a postmortem avatar of someone from the past just because it is technically possible, and what are the implications of this kind of forced immortality? This paper describes the history of the technologically mediated simulation of people, discussing the benefits and flaws of each technological iteration. Recent applications of 4D digital technology and AI to palaeoanthropological and historical facial depiction are discussed in relation to the technical, aesthetic, and ethical challenges associated with this phenomenon.
Figure 1. Digital facial depictions of St Nicolas (left) and the ancient Egyptian pharaoh Ramesses II (right).
Figure 2. Facial depictions of ancient Egyptians known as the Goucher (left) and Cohen (right) mummies. Facial depiction images were displayed in the exhibition Who Am I? Remembering the Dead Through Facial Reconstruction at Johns Hopkins Archaeological Museum in 2018.
Figure 3. Original animatronic Abraham Lincoln developed by Disney. https://flickr.com/photos/79172203@N00/42346731335, accessed on 12 November 2024.
Figure 4. Stills of the talking head of Robert Burns: a 3D digital facial depiction of the Scottish poet driven by performance transfer.
Figure 5. High-fidelity digital avatars created using MetaHuman Creator.
Figure 6. Facial depiction of female remains (dated 774–993 AD) recovered from Saint Peter's Abbey, Ghent, Belgium; close-up view on the right. The facial depiction process used Mesh to MetaHuman to import a 3D facial reconstruction face model into MetaHuman Creator and add textures, including skin and eye colours. Clothing and hair were produced separately in Autodesk Maya, ZBrush, and Adobe Substance Painter. Image courtesy of Face Lab and Ghent University.
Figure 7. Performance capture and transfer: Richard III's digital avatar (left), actor (centre), and the actor's digital avatar (right).
Figure 8. An audience member interacts with the digital avatar of a Stone Age shaman. Image courtesy of Serbia Pavilion Expo 2020 Dubai.
16 pages, 1709 KiB  
Article
Differential Infiltration of Key Immune T-Cell Populations Across Malignancies Varying by Immunogenic Potential and the Likelihood of Response to Immunotherapy
by Islam Eljilany, Sam Coleman, Aik Choon Tan, Martin D. McCarter, John Carpten, Howard Colman, Abdul Rafeh Naqash, Igor Puzanov, Susanne M. Arnold, Michelle L. Churchman, Daniel Spakowicz, Bodour Salhia, Julian Marin, Shridar Ganesan, Aakrosh Ratan, Craig Shriver, Patrick Hwu, William S. Dalton, George J. Weiner, Jose R. Conejo-Garcia, Paulo Rodriguez and Ahmad A. Tarhini
Cells 2024, 13(23), 1993; https://doi.org/10.3390/cells13231993 - 3 Dec 2024
Abstract
Background: Solid tumors vary by the immunogenic potential of the tumor microenvironment (TME) and the likelihood of response to immunotherapy. The emerging literature has identified key immune cell populations that significantly impact immune activation or suppression within the TME. This study investigated candidate T-cell populations and their differential infiltration within different tumor types, as estimated from mRNA co-expression levels of the corresponding cellular markers. Methods: We analyzed the mRNA co-expression levels of cellular biomarkers that define stem-like tumor-infiltrating lymphocytes (TILs), tissue-resident memory T-cells (TRM), early dysfunctional T-cells, late dysfunctional T-cells, activated-potentially anti-tumor (APA) T-cells, and Butyrophilin 3A (BTN3A) isoforms, utilizing clinical and transcriptomic data from 1892 patients diagnosed with melanoma, bladder, ovarian, or pancreatic carcinomas. Real-world data were collected under the Total Cancer Care Protocol and the Avatar® project (NCT03977402) across 18 cancer centers. Furthermore, we compared survival outcomes following immune checkpoint inhibitors (ICIs) based on immune cell gene expression. Results: In melanoma and bladder cancer, the estimated infiltration of APA T-cells differed significantly (p = 4.67 × 10⁻¹² and p = 5.80 × 10⁻¹², respectively) compared to ovarian and pancreatic cancers. Ovarian cancer had lower TRM T-cell infiltration than melanoma, bladder, and pancreatic cancers (p = 2.23 × 10⁻⁸, 3.86 × 10⁻²⁸, and 7.85 × 10⁻⁹, respectively). Similar trends were noted with stem-like, early, and late dysfunctional T-cells. Melanoma and ovarian cancer expressed BTN3A isoforms more than the other malignancies. Higher densities of stem-like TILs; TRM, early, and late dysfunctional T-cells; APA T-cells; and BTN3A isoforms were associated with increased survival in melanoma (p = 0.0075, 0.00059, 0.013, 0.005, 0.0016, and 0.041, respectively). The TRM gene signature was a moderate predictor of survival in the melanoma cohort (AUROC = 0.65), with similar findings in independent public datasets of ICI-treated melanoma patients (AUROC 0.61–0.64). Conclusions: Key cellular elements related to immune activation are more heavily infiltrated within ICI-responsive versus non-responsive malignancies, supporting a central role in anti-tumor immunity. In melanoma patients treated with ICIs, higher densities of stem-like TILs, TRM T-cells, early dysfunctional T-cells, late dysfunctional T-cells, APA T-cells, and BTN3A isoforms were associated with improved survival.
(This article belongs to the Special Issue Cellular and Molecular Mechanisms in Immune Regulation)
Figure 1. Gene expression of different infiltrating T-cells among four malignancies. The boxplots show the gene expression levels of the signatures corresponding to the T-cell subtypes of interest, as well as Butyrophilin 3A (BTN3A) isoforms, among four cancer types. The Y-axis represents gene expression as a z-score, and the X-axis represents the four cancer types: ovarian, bladder, pancreatic, and melanoma. The p-value threshold was 0.001. (A) Differential expression of stem-like tumor-infiltrating lymphocytes (TILs) across the four cancer types. (B) Expression patterns of tissue-resident memory (TRM) T-cells. (C) Activated-potentially anti-tumor T-cells. (D) Early dysfunctional T-cells. (E) Late dysfunctional T-cells. (F) Expression of BTN3A isoforms.
Figure 2. Differential gene expression in immunotherapy responders vs. non-responders in patients with melanoma (n = 123): a box plot analysis.
Figure 3. Survival probability of melanoma patients treated with immunotherapy (n = 123): impact of estimated T-cell subtype infiltration.
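The AUROC values reported for the TRM gene signature can be reproduced in form (not in substance) with a standard ROC analysis of a per-patient signature score against response labels. The scores below are simulated and the separation is arbitrary; this only illustrates the evaluation step, not the study's data or model.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Simulated per-patient signature scores (e.g., a mean z-score over the
# marker genes of a T-cell subtype) for responders vs. non-responders.
score_responders = rng.normal(1.0, 1.0, 60)
score_nonresponders = rng.normal(0.0, 1.0, 63)

y_true = np.r_[np.ones(60), np.zeros(63)]          # 1 = responder
scores = np.r_[score_responders, score_nonresponders]
auroc = roc_auc_score(y_true, scores)              # area under the ROC curve
```

An AUROC of 0.5 is chance; the 0.61–0.65 range reported in the abstract corresponds to a modest but consistent separation between responders and non-responders.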
17 pages, 6775 KiB  
Article
Optimized Data Transmission and Signal Processing for Telepresence Suits in Multiverse Interactions
by Artem Volkov, Ammar Muthanna, Alexander Paramonov, Andrey Koucheryavy and Ibrahim A. Elgendy
J. Sens. Actuator Netw. 2024, 13(6), 82; https://doi.org/10.3390/jsan13060082 - 29 Nov 2024
Abstract
With the rapid development of the metaverse, designing effective interfaces in virtual and augmented environments presents significant challenges. Keeping real-time sensory data flowing from users to their virtual avatars seamlessly and accurately is one of the biggest challenges in this domain. To this end, this article investigates a telepresence suit as an interface for interaction within the metaverse and with virtual avatars, aiming to address the complexities of signal generation, conversion, and transmission in real-time telepresence systems. We model a telepresence suit framework that systematically generates state data and transmits it to end-points, which can be either robotic avatars or virtual representations within a metaverse environment. Through a hand movement study, we minimized the volume of transmitted information, reducing traffic by over 50%, which directly decreased channel load and packet delivery delay. For instance, as channel load decreases from 0.8 to 0.4, packet delivery delay is reduced by approximately half. This optimization not only enhances system responsiveness but also improves accuracy, particularly by reducing delays and errors in high-priority signal paths, enabling more precise and reliable telepresence interactions in metaverse settings.
Figure 1. Telepresence suit model.
Figure 2. Telepresence suit data model.
Figure 3. Composition of the signal transmission route from the sensor to the actuator.
Figure 4. Model of the original and restored signals.
Figure 5. Typical implementation of a "slow" signal (a) and its energy spectrum (b).
Figure 6. Typical form of a "fast" signal and its energy spectrum.
Figure 7. Dependence of the share of "slow" signal energy on frequency.
Figure 8. Dependence of the share of "fast" signal energy on frequency.
Figure 9. View of the objective function when m = 2.
Figure 10. A user wearing a telepresence suit, equipped with multiple sensors for capturing movement data.
Figure 11. The HOLOTAR system example.
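The qualitative relationship between channel load and packet delivery delay described in the abstract can be illustrated with the simplest queueing approximation, an M/M/1 model. This is an assumption for illustration only; the paper's own traffic and service model may differ, and the exact ratio between delays at loads 0.8 and 0.4 depends on that model.

```python
def mm1_sojourn(load, service_rate=1.0):
    """Mean sojourn (delivery) time W = 1 / (mu - lambda) in an M/M/1 queue,
    where the arrival rate is lambda = load * mu. Valid for 0 <= load < 1."""
    if not 0.0 <= load < 1.0:
        raise ValueError("load must be in [0, 1)")
    mu = service_rate
    lam = load * mu
    return 1.0 / (mu - lam)

w_high = mm1_sojourn(0.8)  # 5.0 time units
w_low = mm1_sojourn(0.4)   # ~1.67 time units
```

The nonlinearity is the key point: as load approaches 1 the delay diverges, so halving traffic (and hence load) yields a disproportionately large delay reduction, consistent with the improvement the authors report.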
12 pages, 3206 KiB  
Article
Precision Treatment of Metachronous Multiple Primary Malignancies Based on Constructing Patient Tumor-Derived Organoids
by Yicheng Wang, Haotian Chen, Zhijin Zhang, Yanyan He, Ji Liu, Baoshuang Zhao, Qinwan Wang, Jiangmei Xu, Shiyu Mao, Wentao Zhang, Xudong Yao and Wei Li
Biomedicines 2024, 12(12), 2708; https://doi.org/10.3390/biomedicines12122708 - 27 Nov 2024
Abstract
When a patient has two or more primary tumors, excluding the possibility of diffuse, recurrent, or metastatic disease, they can be defined as having multiple primary malignant neoplasms (MPMNs). Cases of three primary urinary tract tumors are very rare. Here, we report a patient with MPMNs comprising four primary tumors: three urinary tract cancers (renal cancer, prostate cancer, and bladder cancer) and lung cancer. The four tumors appeared over 13 years, and pathological results after the different surgeries confirmed that they were all primary tumors. In addition, we established patient-derived organoids (PDOs) from collected tumor specimens. Hematoxylin-eosin (H&E) staining showed that the organoids were histopathologically consistent with the parental tumor, and immunohistochemistry showed that the PDOs also reflect the expression of pathological markers in the patient. At the same time, PDOs may serve as "avatars" of patients to predict sensitivity to different drugs. In summary, we report a case of MPMNs with four primary tumors and established PDOs from its tumor specimens; a personalized treatment strategy was established based on the histopathological characteristics of the organoids.
Figure 1. MRI images of the prostate. The figure shows a mass in the prostate involving the posterior wall of the bladder, bilateral seminal vesicles, and the anterior wall of the distal rectum. Lymph nodes in the pelvic wall and inguinal area were enlarged, accompanied by numerous abnormal signal spots in the pelvis. (A–C) T2WI images of prostate cancer in sagittal, axial, and coronal views; the prostate lesion appears as a low signal. (D) DWI image of prostate cancer, with the tumor exhibiting a high signal. (E) ADC image of prostate cancer, indicating a lower ADC value for the tumor. The red arrow points to the prostate cancer.
Figure 2. Enhanced CT images of the kidneys. The figure shows a 4.2 × 3.5 × 4 cm nodule in the middle part of the right kidney, with dilatation and fluid accumulation in the right renal pelvis, calyces, and upper segment of the ureter. (A) Unenhanced: a solitary lobulated mass within the right kidney parenchyma, protruding beyond the renal contour, with uneven density, irregular low-density areas inside, and unclear margins. (B) Enhanced arterial phase: the mass shows obvious non-homogeneous enhancement, its density close to that of the renal cortex, with a non-enhanced necrotic area visible in the center. (C,D) Enhanced portal venous and venous phases: the mass demonstrates rapid washout, with a density lower than that of the normal renal parenchyma. The red arrow points to the kidney cancer.
Figure 3. Progression and corresponding surgeries of the four primary tumors in the patient. The patient received a transurethral resection of bladder tumor to treat bladder cancer in September 2011. A right lobectomy and lymph node dissection for lung cancer were conducted in November 2016. Prostate cancer was diagnosed on 5 June 2023 through an ultrasound-guided prostate biopsy. A laparoscopic partial resection of the right kidney for kidney cancer treatment was performed on 6 June 2023.
Figure 4. Graphical illustration of organoid construction and prediction of drug targets. 1. Renal tumor specimens are obtained through laparoscopic partial nephrectomy. 2. Washing, slicing, and digestion isolate tumor cells. 3. Cells are 3D-cultured in Matrigel. 4. H&E and IHC staining are performed on the constructed organoids to predict drug targets.
Figure 5. The organoids preserve the histopathological features inherent to the original tumor tissues. (A) Representative H&E staining images of primary ccRCC tumors and organoids. (B–D) Representative IHC staining images of original tumors and organoids for CAIX, CD10, and RCC. (E) Representative IHC staining images of original tumors for VEGF, with organoids stained for VEGF by immunofluorescence (green). (F) Representative IHC staining images of organoids for PD-L1, with organoids stained for PD-L1 by immunofluorescence (green).
13 pages, 1871 KiB  
Article
Exploring the Psychological and Physiological Effects of Operating a Telenoid: The Preliminary Assessment of a Minimal Humanoid Robot for Mediated Communication
by Aya Nakae, Hani M. Bu-Omer, Wei-Chuan Chang, Chie Kishimoto and Hidenobu Sumioka
Sensors 2024, 24(23), 7541; https://doi.org/10.3390/s24237541 - 26 Nov 2024
Abstract
Background: As the Internet of Things (IoT) expands, it enables new forms of communication, including interactions mediated by teleoperated robots such as avatars. While extensive research exists on the effects of these devices on communication partners, there is limited research on the impact on the operators themselves. This study aimed to objectively assess the psychological and physiological effects of operating a teleoperated robot, specifically Telenoid, on its human operator. Methods: Twelve healthy participants (2 women and 10 men, aged 18–23 years) were recruited from Osaka University. Participants engaged in two communication sessions with a first-time partner: face-to-face and Telenoid-mediated. Telenoid is a minimalist humanoid robot teleoperated by a participant. Blood samples were collected before and after each session to measure hormonal and oxidative markers, including cortisol, diacron reactive oxygen metabolites (d-ROMs), and the biological antioxidant potential of plasma (BAP). Psychological stress was assessed using validated questionnaires (POMS-2, HADS, and SRS-18). Results: A trend toward decreased cortisol levels was observed during Telenoid-mediated communication, whereas face-to-face interactions showed no significant changes. Oxidative stress, measured by d-ROMs, significantly increased after face-to-face interactions but not after Telenoid-mediated sessions. Significant correlations were found between oxytocin and d-ROMs and psychological stress scores, particularly for helplessness and total stress measures. However, no significant changes were observed in other biomarkers or between the two conditions for most psychological measures. Conclusions: These findings suggest that cortisol and d-ROMs may serve as objective biomarkers for assessing psychophysiological stress during robot-mediated communication. Telenoid's minimalist design may help reduce social pressures and mitigate stress compared to face-to-face interactions. Further research with larger, more diverse samples and longitudinal designs is needed to validate these findings and explore the broader impacts of teleoperated robots.
(This article belongs to the Section Sensors and Robotics)
Figure 1. Study design.
Figure 2. Depiction of the experimental setup in the two sessions: (a) Facing session and (b) Telenoid session.
Figure 3. Changes in serum hormones and markers of oxidation/antioxidation: serum levels of (a) cortisol, (b) oxytocin, (c) d-ROMs, and (d) BAP before (pre) and after (post) conversation in the Facing and Telenoid sessions. Significant differences indicated as # p < 0.1 or ** p < 0.01.
Figure 4. Changes in the scores of each questionnaire before (pre) and after (post) conversation in the Facing and Telenoid sessions: (a) POMS-2, (b) HADS, and (c) SRS-18. Significant differences indicated as # p < 0.1, * p < 0.05, or ** p < 0.01.
12 pages, 631 KiB  
Article
Evaluating Psychological Effects of Amputation Through Virtual Reality Embodiment: A Study on Anxiety and Body Appreciation
by Aina Manzano-Torra, Bruno Porras-Garcia and José Gutiérrez-Maldonado
J. Clin. Med. 2024, 13(23), 7079; https://doi.org/10.3390/jcm13237079 - 23 Nov 2024
Viewed by 628
Abstract
Background/Objectives: A high number of patients who suffer the amputation of a lower limb will present psychological problems such as anxiety, depression, and post-traumatic stress disorder after surgery. This study embodies participants in a self-avatar with a right lower-limb amputation in a virtual reality environment. The aim was to determine if this experience increases anxiety levels compared to embodiment in a normal avatar. The study also examines whether body appreciation is related to anxiety levels. Methods: Subjects completed the Body Appreciation Scale (BAS) questionnaire before being immersed in the virtual environment, the Visual Analogue Scale for Anxiety (VAS-A) after each condition, and the Embodiment Questionnaire at the end of the experiment. Results: Univariate analysis showed that participants reported significantly higher levels of anxiety when exposed to the virtual avatar with an amputation compared to the full virtual body avatar. The analyses also indicated that lower levels of body appreciation were associated with higher levels of anxiety across conditions, suggesting that participants with lower body appreciation experienced greater psychological maladjustment (measured by anxiety) in response to the virtual scenarios. Conclusions: The results suggest that the virtual avatar with a lower-limb amputation elicited significantly greater anxiety, and that body appreciation plays a key role in moderating this psychological response. Future research could focus on developing virtual exposure-based therapy for amputees using virtual reality to help reduce the anxiety experienced by patients during this process. Full article
(This article belongs to the Special Issue Diagnosis, Treatment, and Prognosis of Neuropsychiatric Disorders)
Figure 1
<p>Avatars’ appearance: (<b>a</b>) real-body standard avatar; (<b>b</b>) lower-limb amputation standard avatar.</p>
Figure 2
<p>Bar chart comparing mean anxiety levels in the complete and amputated avatar conditions.</p>
18 pages, 371 KiB  
Review
Dialogue with Avatars in Simulation-Based Social Work Education: A Scoping Review
by Åsa Vidman and Pia Tham
Soc. Sci. 2024, 13(11), 628; https://doi.org/10.3390/socsci13110628 - 20 Nov 2024
Viewed by 625
Abstract
Virtual reality provides students with the opportunity to have simulated experiences in a safe setting and is mostly used to teach direct practice skills. One of the most advanced ways of using virtual simulation in social work education is to interact with avatars. Aim: The overall aim of this scoping review was to find out what is known about the use of dialogue with avatars in virtual reality in simulation-based social work education. Materials: Using Arksey and O’Malley’s scoping review framework, 11 articles were included in this review. Results: The skills taught with the avatars varied, as did the ways of preparing students for the sessions. The training was assessed as meaningful learning in a safe and comfortable environment, offering an opportunity to train in practical skills. According to the pre- and post-tests, in several studies the students’ skills seemed to have improved after the training. The qualitative data also pointed to skill developments. Conclusion: Training with avatars seems to be a useful way of preparing students for their future profession and seems to hold great potential in preparing students for demanding situations that cannot be easily trained for in a classroom. The results also point to technical elements that would benefit from development. Full article
(This article belongs to the Special Issue Digital Intervention for Advancing Social Work and Welfare Education)
18 pages, 2568 KiB  
Article
ATGT3D: Animatable Texture Generation and Tracking for 3D Avatars
by Fei Chen and Jaeho Choi
Electronics 2024, 13(22), 4562; https://doi.org/10.3390/electronics13224562 - 20 Nov 2024
Viewed by 423
Abstract
We propose ATGT3D, an Animatable Texture Generation and Tracking framework for 3D Avatars, featuring the innovative design of the Eye Diffusion Module (EDM) and the Pose Tracking Diffusion Module (PTDM), which are dedicated to high-quality eye texture generation and to synchronized tracking of dynamic poses and textures, respectively. Compared to traditional GAN and VAE methods, ATGT3D significantly enhances texture consistency and generation quality in animated scenes using the EDM, which produces high-quality full-body textures with detailed eye information using the HUMBI dataset. Additionally, the PTDM monitors human motion parameters utilizing the BEAT2 and AMASS mesh-level animatable human model datasets. The EDM, in conjunction with a basic texture seed featuring eyes and the diffusion model, restores high-quality textures, whereas the PTDM, by integrating MoSh++ and SMPL-X body parameters, models hand and body movements from 2D human images, thus providing superior 3D motion capture datasets. This module maintains the synchronization of textures and movements over time to ensure precise animation texture tracking. During training, the ATGT3D model uses the diffusion model as the generative backbone to produce new samples. The EDM improves the texture generation process by enhancing the precision of eye details in texture images. The PTDM involves joint training for pose generation and animation tracking reconstruction. Textures and body movements are generated individually using encoded prompts derived from masked gestures. Furthermore, ATGT3D adaptively integrates texture and animation features using the diffusion model to enhance both fidelity and diversity. Experimental results show that ATGT3D achieves optimal texture generation performance and can flexibly integrate predefined spatiotemporal animation inputs to create comprehensive human animation models. Our experiments yielded unexpectedly positive outcomes. Full article
(This article belongs to the Special Issue AI for Human Collaboration)
Figure 1
<p>This framework facilitates the generation and tracking of animated textures for 3D virtual images.</p>
Figure 2
<p>The AMASS dataset is sorted based on the attributes with the most actions (motions) and the least time (minutes). The light blue bars represent subsets of the dataset not utilized in the study; the dark blue bars represent the remaining subsets that were selected for evaluation and experimentation.</p>
Figure 3
<p>Overview of the proposed method for texture recovery estimation from a single image.</p>
Figure 4
<p>Complete texture tracking in our method to match 3D human models.</p>
Figure 5
<p>Texture generation as well as tracking and matching graphs of textures with modeled action poses.</p>
Figure 6
<p>Part (<b>a</b>) depicts the texture map processed using the image diffusion model to recover high-quality texture. Part (<b>b</b>) shows the untrained texture map. The eye region is clearly sharper in image (<b>a</b>) than in image (<b>b</b>).</p>
Figure 7
<p>Examples of multiple actions across multiple datasets. From top to bottom: natural human postures of various actions for (<b>a</b>) AMASS jump Model and AMASS jump Texture, (<b>b</b>) AMASS pick-up Model and AMASS pick-up Texture, and (<b>c</b>) EMAGE Model and EMAGE Texture.</p>
24 pages, 9386 KiB  
Article
Toward Improving Human Training by Combining Wearable Full-Body IoT Sensors and Machine Learning
by Nazia Akter, Andreea Molnar and Dimitrios Georgakopoulos
Sensors 2024, 24(22), 7351; https://doi.org/10.3390/s24227351 - 18 Nov 2024
Viewed by 827
Abstract
This paper proposes DigitalUpSkilling, a novel IoT- and AI-based framework for improving and personalising the training of workers who are involved in physical-labour-intensive jobs. DigitalUpSkilling uses wearable IoT sensors to observe how individuals perform work activities. Such sensor observations are continuously processed to synthesise an avatar-like kinematic model for each worker who is being trained, referred to as the worker’s digital twin. The framework incorporates novel work activity recognition using generative adversarial network (GAN) and machine learning (ML) models for recognising the types and sequences of work activities by analysing an individual’s kinematic model. Finally, skill proficiency ML models are proposed to evaluate each trainee’s proficiency in work activities and the overall task. To illustrate DigitalUpSkilling, from wearable IoT-sensor-driven kinematic models to GAN-ML models for work activity recognition and skill proficiency assessment, the paper presents a comprehensive study on how specific meat processing activities in a real-world work environment can be recognised and assessed. In the study, DigitalUpSkilling achieved 99% accuracy in recognising specific work activities performed by meat workers. The study also presents an evaluation of the proficiency of workers by comparing kinematic data from trainees performing work activities. The proposed DigitalUpSkilling framework lays the foundation for next-generation digital personalised training. Full article
(This article belongs to the Special Issue Wearable and Mobile Sensors and Data Processing—2nd Edition)
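The pipeline the abstract describes, from wearable-sensor streams to activity recognition, can be sketched in a minimal form. The windowed features and nearest-centroid classifier below are illustrative stand-ins (the paper uses GAN-augmented ML models), and the pitch signals are synthetic, not the study's data:

```python
import numpy as np

def window_features(signal, win=50, step=25):
    """Slide a fixed-length window over a 1-D sensor stream and extract
    simple statistical features (mean, std, range) per window."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

# Hypothetical pitch streams for two activity classes (the paper's classes
# are meat-processing activities such as boning and slicing; these signals
# are synthetic stand-ins).
rng = np.random.default_rng(1)
boning = 30 + 10 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 1, 500)
slicing = 5 + 2 * np.sin(np.linspace(0, 40, 500)) + rng.normal(0, 1, 500)

X = np.vstack([window_features(boning), window_features(slicing)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

# Nearest-centroid classifier: a deliberately minimal stand-in for the
# paper's GAN-ML activity recognition models.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```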
Figure 1
<p>DigitalUpSkilling framework.</p>
Figure 2
<p>Hybrid GAN-ML activity classification.</p>
Figure 3
<p>Skill proficiency assessment.</p>
Figure 4
<p>(<b>a</b>) Placement of sensors; (<b>b</b>) sensors and straps; (<b>c</b>) alignment of sensors with the participant’s movements.</p>
Figure 5
<p>Work environment for the data collection: (<b>a</b>) boning area; (<b>b</b>) slicing area.</p>
Figure 6
<p>Dataflow of the study.</p>
Figure 7
<p>(<b>a</b>) Worker performing boning; (<b>b</b>) worker’s real-time digital twin; (<b>c</b>) digital twins showing body movements along with real-time graphs of the joint’s movements.</p>
Figure 8
<p>Comparison of the error rates of the different ML models.</p>
Figure 9
<p>Confusion matrices: (<b>a</b>) boning; (<b>b</b>) slicing with pitch and roll from right-hand sensors.</p>
Figure 10
<p>Distribution of the activity classification: (<b>a</b>) boning; (<b>b</b>) slicing.</p>
Figure 11
<p>Accuracy of the GAN for different percentages of synthetic data: (<b>a</b>) boning; (<b>b</b>) slicing.</p>
Figure 12
<p>Accuracy of the GAN with different percentages of synthetic data (circled area showing drop in the accuracy): (<b>a</b>) boning; (<b>b</b>) slicing.</p>
Figure 13
<p>Classification accuracy with the GAN, SMOTE, and ENN (circled area showing improvement in the accuracy): (<b>a</b>) boning; (<b>b</b>) slicing.</p>
Figure 14
<p>Distribution of right-hand pitch and roll mean (in degree).</p>
Figure 15
<p>Comparison of the engagement in boning (W1: Worker 1; W2: Worker 2).</p>
Figure 16
<p>Comparison of the engagement in slicing.</p>
Figure 17
<p>Comparison of the accelerations of the right hand.</p>
Figure 18
<p>Comparison of the accelerations of the right-hand.</p>
Figure 19
<p>Comparisons of abduction, rotation, and flexion of the right shoulder during boning activities: (<b>a</b>) worker 1; (<b>b</b>) worker 2.</p>
14 pages, 1817 KiB  
Article
A Taxonomy of Embodiment in the AI Era
by Thomas Hellström, Niclas Kaiser and Suna Bensch
Electronics 2024, 13(22), 4441; https://doi.org/10.3390/electronics13224441 - 13 Nov 2024
Viewed by 872
Abstract
This paper presents a taxonomy of agents’ embodiment in physical and virtual environments. It categorizes embodiment based on five entities: the agent being embodied, the possible mediator of the embodiment, the environment in which sensing and acting take place, the degree of body, and the intertwining of body, mind, and environment. The taxonomy is applied to a wide range of embodiment of humans, artifacts, and programs, including recent technological and scientific innovations related to virtual reality, augmented reality, telepresence, the metaverse, digital twins, and large language models. The presented taxonomy is a powerful tool to analyze, clarify, and compare complex cases of embodiment. For example, it makes the choice between a dualistic and non-dualistic perspective of an agent’s embodiment explicit and clear. The taxonomy also aided us to formulate the term “embodiment by proxy” to denote how seemingly non-embodied agents may affect the world by using humans as “extended arms”. We also introduce the concept “off-line embodiment” to describe large language models’ ability to create an illusion of human perception. Full article
(This article belongs to the Special Issue Metaverse and Digital Twins, 2nd Edition)
Figure 1
<p>Examples of various types of human embodiment categorized by our taxonomy.</p>
Figure 2
<p>Embodiment of different types of robots and other artifacts according to our taxonomy.</p>
Figure 3
<p>Embodiment of different types of computer programs according to our taxonomy [<a href="#B34-electronics-13-04441" class="html-bibr">34</a>,<a href="#B35-electronics-13-04441" class="html-bibr">35</a>].</p>
17 pages, 975 KiB  
Review
Drosophila as a Model for Human Disease: Insights into Rare and Ultra-Rare Diseases
by Sergio Casas-Tintó
Insects 2024, 15(11), 870; https://doi.org/10.3390/insects15110870 - 6 Nov 2024
Viewed by 2088
Abstract
Rare and ultra-rare diseases constitute a significant medical challenge due to their low prevalence and the limited understanding of their origin and underlying mechanisms. These disorders often exhibit phenotypic diversity and molecular complexity that represent a challenge to biomedical research. There are more than 6000 different rare diseases that affect nearly 300 million people worldwide. However, the prevalence of each rare disease is low, and in consequence, the biomedical resources dedicated to each rare disease are limited and insufficient to effectively achieve progress in the research. The use of animal models to investigate the mechanisms underlying pathogenesis has become an invaluable tool. Among the animal models commonly used in research, Drosophila melanogaster has emerged as an efficient and reliable experimental model for investigating a wide range of genetic disorders and for developing therapeutic strategies for rare and ultra-rare diseases. It offers several advantages as a research model, including a short life cycle, ease of laboratory maintenance, and a fully sequenced genome, that make it highly suitable for studying genetic disorders. Additionally, there is a high degree of genetic conservation from Drosophila melanogaster to humans, which allows the extrapolation of findings at the molecular and cellular levels. Here, I examine the role of Drosophila melanogaster as a model for studying rare and ultra-rare diseases and highlight its significant contributions and potential to biomedical research. High-throughput next-generation sequencing (NGS) technologies, such as whole-exome sequencing and whole-genome sequencing (WGS), are providing massive amounts of information on the genomic modifications present in rare diseases and common complex traits.
The sequencing of exomes or genomes of individuals affected by rare diseases has enabled human geneticists to identify rare variants and potential loci associated with novel gene–disease relationships. Despite these advances, the average rare disease patient still experiences a significant delay until receiving a diagnosis. Furthermore, the vast majority (95%) of patients with rare conditions lack effective treatment or a cure. This scenario is compounded by frequent misdiagnoses leading to inadequate support. In consequence, there is an urgent need to develop model organisms to explore the molecular mechanisms underlying these diseases and to establish the genetic origin of these maladies. The aim of this review is to discuss the advantages and limitations of Drosophila melanogaster, hereafter referred to as Drosophila, as an experimental model for biomedical research, and its applications to study human disease. The main question to address is whether Drosophila is a valid research model to study human disease, and in particular, rare and ultra-rare diseases. Full article
(This article belongs to the Section Role of Insects in Human Society)
Figure 1
<p>Schematic representation of the genetic alterations that cause rare diseases. Single gene mutations cause genetic aberrations in the DNA sequence of a specific gene. Copy number variation (CNV): the sequences of the genome are repeated. Mitochondrial mutations: the DNA contained in the mitochondria (mtDNA) is mutated. Chromosomal abnormality: the morphology or the number of chromosomes is altered. Polygenic inheritance: more than one gene is mutated. Image generated in <a href="http://BioRender.com" target="_blank">BioRender.com</a> (accessed on 10 October 2024).</p>
Figure 2
<p>CRISPR/Cas9 system used in <span class="html-italic">Drosophila</span> to generate genetic avatars. Representation of the CRISPR/Cas9 system used to generate mutants in the F1 generation; after the cross of parental lines expressing <span class="html-italic">Cas9</span> in the germ line, embryos are injected with a plasmid containing the required tools to induce the excision of a region of DNA (endonuclease activity of Cas9) and the re-insertion of the mutated form of the same piece of DNA. The resulting combination produces the substitution of endogenous exons by mutated exons that reproduce the mutations found in patients. In addition, the plasmid carries a GFP under the control of a constitutive promoter (Actin) to identify the flies that undergo CRISPR/Cas9 substitution. This GFP cDNA is flanked by two FRT sites so that it can be excised, if required, in an additional cross with flies that express <span class="html-italic">flippase</span>. Image generated in <a href="http://BioRender.com" target="_blank">BioRender.com</a> (accessed on 10 October 2024).</p>
13 pages, 3685 KiB  
Article
Study of the Brain Functional Connectivity Processes During Multi-Movement States of the Lower Limbs
by Pengna Wei, Tong Chen, Jinhua Zhang, Jiandong Li, Jun Hong and Lin Zhang
Sensors 2024, 24(21), 7016; https://doi.org/10.3390/s24217016 - 31 Oct 2024
Viewed by 614
Abstract
Studies using source localization results have shown that cortical involvement increased in treadmill walking with brain–computer interface (BCI) control. However, the reorganization of cortical functional connectivity in treadmill walking with BCI control is largely unknown. To investigate this, a public dataset, a mobile brain–body imaging dataset recorded during treadmill walking with a brain–computer interface, was used. The between-region and within-region electroencephalography (EEG) coupling strengths during continuous self-determined movements of the lower limbs were analyzed. The time–frequency cross-mutual information (TFCMI) method was used to calculate the coupling strength. The results showed that the frontal–occipital connection increased in the gamma and delta bands (the threshold of the edge was >0.05) during walking with BCI, which may be related to effective communication when subjects adjust their gaits to control the avatar. In walking with BCI control, the results showed theta oscillation within the left-frontal region, which may be related to error processing and decision making. We also found that between-region connectivity was suppressed in walking with and without BCI control compared with standing states. These findings suggest that walking with BCI may accelerate the rehabilitation process for lower limb stroke. Full article
(This article belongs to the Section Biomedical Sensors)
Figure 1
<p>Experimental paradigm: (<b>a</b>) EEG channel layout and (<b>b</b>) protocol timeline.</p>
Figure 2
<p>EEG-preprocessing steps.</p>
Figure 3
<p>The calculation process of TFCMI. (<b>a</b>) The raw EEG obtained from 58 channels was first filtered by a bandpass filter (with 0.1–50 Hz passband). (<b>b</b>) The filtered signals of each channel were processed using the Morlet wavelet transformation to obtain time–frequency power maps within the selected frequency band (16–25 Hz). (<b>c</b>) The averaged power signal for each channel was created by averaging the individual time–frequency maps across the selected frequency band. (<b>d</b>) The 58 × 58 TFCMI map was obtained by calculating the TFCMI values from the averaged powers between any two channels. (<b>e</b>) The accumulated coupling strengths can be represented by summing the rows or columns of TFCMI maps and depicted as a 58-channel topographic map.</p>
Figure 4
<p>The topographic maps of the averaged accumulated coupling strength of eight subjects: (<b>a</b>) delta, (<b>b</b>) theta, and (<b>c</b>) gamma bands. Aft is the standing after W + BCI; Pre is the standing before W.</p>
Figure 5
<p>The accumulated coupling connectivity difference between two states, including the modulation of 10-to-1 connectivity from Pre to W, W to WB, and Pre to Aft: (<b>a</b>) delta band, (<b>b</b>) theta band, (<b>c</b>) gamma band. Aft is the standing after W + BCI; Pre is the standing before W.</p>
Figure 6
<p>The statistical analysis of the connectivity network in TFCMI values for the four states: (<b>a</b>) delta, (<b>b</b>) theta, and (<b>c</b>) gamma band; the green balls represent the within-region connectivity, and the lines are the significant between-region connectivity.</p>
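The Figure 3 caption above enumerates the TFCMI computation steps (a)–(d). A minimal sketch of that pipeline follows; the filter order, wavelet width (n_cycles), and histogram-based mutual-information estimator are assumptions, since the listing does not specify them:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def morlet_power(x, fs, freq, n_cycles=6):
    """Instantaneous power of x at one frequency, via convolution with a
    complex Morlet wavelet (n_cycles is an assumed width)."""
    sigma = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

def tfcmi_map(eeg, fs, band=(16, 25), n_freqs=5, n_bins=8):
    """(a) bandpass filter; (b) Morlet time-frequency power; (c) power
    averaged over the band; (d) pairwise mutual information (plug-in
    histogram estimator) between the channels' averaged-power signals."""
    b, a = butter(4, [0.5, min(45.0, fs / 2 - 1)], btype="band", fs=fs)
    filtered = filtfilt(b, a, eeg, axis=1)
    freqs = np.linspace(band[0], band[1], n_freqs)
    avg_power = np.array([
        np.mean([morlet_power(ch, fs, f) for f in freqs], axis=0)
        for ch in filtered
    ])

    def mi(x, y):
        joint, _, _ = np.histogram2d(x, y, bins=n_bins)
        p = joint / joint.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / np.outer(px, py)[nz])).sum())

    n_ch = eeg.shape[0]
    return np.array([[mi(avg_power[i], avg_power[j]) for j in range(n_ch)]
                     for i in range(n_ch)])
```

Summing the rows of the resulting map gives the accumulated coupling strength per channel, as in step (e) of the caption.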
21 pages, 16019 KiB  
Article
Avatar Detection in Metaverse Recordings
by Felix Becker, Patrick Steinert, Stefan Wagenpfeil and Matthias L. Hemmje
Virtual Worlds 2024, 3(4), 459-479; https://doi.org/10.3390/virtualworlds3040025 - 30 Oct 2024
Viewed by 675
Abstract
The metaverse is gradually expanding. There is a growing number of photo and video recordings of metaverse virtual worlds being used in multiple domains, and the collection of these recordings is a rapidly growing field. An essential element of the metaverse and its recordings is the concept of avatars. In this paper, we present the novel task of avatar detection in metaverse recordings, supporting semantic retrieval in collections of metaverse recordings and other use cases. Our work addresses the characterizations and definitions of avatars and presents a new model that supports avatar detection. The latest object detection algorithms are trained and tested on a variety of avatar types in metaverse recordings. Our work achieves a significantly higher level of accuracy than existing models, which encourages further research in this field. Full article
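The evaluation figures for this article report AP, mAP, and F1 at different thresholds, which are standard object-detection metrics. Below is a minimal sketch of how F1 can be computed at a confidence threshold via greedy IoU matching; the boxes are hypothetical, not the paper's data:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f1_at_threshold(preds, truths, conf_thresh, iou_thresh=0.5):
    """preds: list of (box, confidence). Predictions above the confidence
    threshold are greedily matched (highest confidence first) to unmatched
    ground-truth boxes at the given IoU threshold."""
    kept = sorted([p for p in preds if p[1] >= conf_thresh],
                  key=lambda p: -p[1])
    matched, tp = set(), 0
    for box, _ in kept:
        cands = [(iou(box, gt), i) for i, gt in enumerate(truths)
                 if i not in matched]
        if cands:
            best_iou, best_i = max(cands)
            if best_iou >= iou_thresh:
                matched.add(best_i)
                tp += 1
    precision = tp / len(kept) if kept else 0.0
    recall = tp / len(truths) if truths else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Sweeping conf_thresh over the prediction confidences yields an F1-versus-threshold curve like those in the figures; AP additionally integrates precision over recall.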
Figure 1
<p>Highlighted by the yellow and red squares are two recognised avatars interacting with each other.</p>
Figure 2
<p>Samples of the 256 Metaverse Recording dataset.</p>
Figure 3
<p>Information UML Class Diagram of <span class="html-italic">Avatars</span>.</p>
Figure 4
<p>Example image annotation of avatars in <span class="html-italic">MVRs</span> with <span class="html-italic">LabelImg</span>.</p>
Figure 5
<p>Example of detected avatar instances.</p>
Figure 6
<p>AP and mAP of YOLO<sup>base</sup> on Test Data.</p>
Figure 7
<p>F1 Score for Different Thresholds of YOLO<sup>base</sup>.</p>
Figure 8
<p>Predicted Avatars YOLO<sup>base</sup> on Test Data of ADET-DS.</p>
Figure 9
<p>AP and mAP of ADET<sup>indicator</sup> on test data of ADET-DS.</p>
Figure 10
<p>F1 Score for Different Thresholds of ADET<sup>indicator</sup>.</p>
Figure 11
<p>Predicted Avatars ADET<sup>indicator</sup> on Test Data of ADET-DS.</p>