Yeung et al., 2023 - Google Patents
Gender biases in tone analysis: a case study of a commercial wearable
- Document ID
- 6622916715826128606
- Author
- Yeung C
- Iqbal U
- Kohno T
- Roesner F
- Publication year
- 2023
- Publication venue
- Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
Snippet
In addition to being a health and fitness band, the Amazon Halo offers users information about how their voices sound, i.e., their 'tones'. The Halo's tone analysis capability leverages machine learning, which can lead to potentially biased inferences. We develop an auditing …
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/20—Handling natural language data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation, e.g. computer aided management of electronic mail or groupware; Time management, e.g. calendars, reminders, meetings or time accounting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computer systems utilising knowledge based models
- G06N5/02—Knowledge representation
- G06N5/022—Knowledge engineering, knowledge acquisition
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
Similar Documents
Publication | Title
---|---
Sultana et al. | SUST Bangla Emotional Speech Corpus (SUBESCO): An audio-only emotional speech corpus for Bangla
Shriberg et al. | A nonword repetition task for speakers with misarticulations: The Syllable Repetition Task (SRT)
Sauter et al. | Perceptual cues in nonverbal vocal expressions of emotion
JP2022553749A | Acoustic and Natural Language Processing Models for Velocity-Based Screening and Behavioral Health Monitoring
Pugh et al. | Say what? Automatic modeling of collaborative problem solving skills from student speech in the wild
Johar | Emotion, affect and personality in speech: The Bias of language and paralanguage
Nasreen et al. | Alzheimer's dementia recognition from spontaneous speech using disfluency and interactional features
US20230048098A1 | Apparatus and method for speech-emotion recognition with quantified emotional states
Xu et al. | Assessing L2 English speaking using automated scoring technology: examining automarker reliability
Cheng et al. | Immediate auditory repetition of words and nonwords: an ERP study of lexical and sublexical processing
Solomon et al. | Objective methods for reliable detection of concealed depression
AU2021314026A1 | Self-adapting and autonomous methods for analysis of textual and verbal communication
Milling et al. | Evaluating the impact of voice activity detection on speech emotion recognition for autistic children
Schirmer et al. | Angry, old, male–and trustworthy? How expressive and person voice characteristics shape listener trust
Law et al. | Automatic voice emotion recognition of child-parent conversations in natural settings
Cohen et al. | A multimodal dialog approach to mental state characterization in clinically depressed, anxious, and suicidal populations
Hosain et al. | Emobone: A multinational audio dataset of emotional bone conducted speech
Koti et al. | Speech Emotion Recognition using Extreme Machine Learning
Lubold et al. | Do conversational partners entrain on articulatory precision?
Alghowinem et al. | Beyond the words: analysis and detection of self-disclosure behavior during robot positive psychology interaction
Kalanadhabhatta et al. | Playlogue: Dataset and benchmarks for analyzing adult-child conversations during play
Yeung et al. | Gender biases in tone analysis: a case study of a commercial wearable
Iyer et al. | Relationships between vocalization forms and functions in infancy: preliminary implications for early communicative assessment and intervention
Correia et al. | Detecting psychological distress in adults through transcriptions of clinical interviews
Gershov et al. | Automating medical simulations