Fan et al., 2016 - Google Patents
Automatic soundscape affect recognition using a dimensional approach
Fan et al., 2016
- Document ID: 13992857498183796972
- Authors: Fan J; Thorogood M; Pasquier P
- Publication year: 2016
- Publication venue: Journal of the Audio Engineering Society
Snippet
Soundscape affect recognition is essential for sound designers and soundscape composers. Previous work demonstrated the effectiveness of predicting valence and arousal of soundscapes in responses from one expert user. Based on this, we present a method for the …
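The snippet outlines the dimensional approach at a high level: represent each soundscape recording by summary audio features, then learn a regression from those features to continuous valence and arousal ratings. The paper's exact feature set and learner are not given in the snippet, so the sketch below is only illustrative: librosa, the SVR regressor, the file names, and the rating values are all assumptions, not the authors' pipeline.

```python
# Minimal sketch of dimensional soundscape affect recognition:
# summarize each recording with audio features, then regress
# valence and arousal ratings (hypothetical data throughout).
import numpy as np
import librosa
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def soundscape_features(path, sr=22050):
    """Collapse one recording into a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
    rms = librosa.feature.rms(y=y)                            # loudness proxy
    frames = np.vstack([mfcc, centroid, rms])                 # (15, n_frames)
    # Mean and standard deviation over time give one vector per clip.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

# Hypothetical training clips with (valence, arousal) ratings in [-1, 1].
clips = ["park.wav", "traffic.wav", "cafe.wav"]
ratings = np.array([[0.6, -0.3], [-0.5, 0.7], [0.2, 0.1]])

X = np.array([soundscape_features(c) for c in clips])
# SVR predicts a single target, so wrap it to predict both dimensions.
model = MultiOutputRegressor(SVR(kernel="rbf")).fit(X, ratings)

valence, arousal = model.predict([soundscape_features("beach.wav")])[0]
print(f"valence={valence:.2f}, arousal={arousal:.2f}")
```

With a real labeled corpus, leave-one-out evaluation against a single rater's annotations would mirror the one-expert-user setup the snippet mentions.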
Concepts
- arousal: abstract, description (69 occurrences)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
          - G06F17/3074—Audio data retrieval
            - G06F17/30749—Audio data retrieval using information manually generated or using information not derived from the audio data, e.g. title and artist information, time and location information, usage information, user ratings
              - G06F17/30752—Audio data retrieval using information manually generated or using information not derived from the audio data, e.g. title and artist information, time and location information, usage information, user ratings using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    - G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
      - G06Q10/00—Administration; Management
        - G06Q10/10—Office automation, e.g. computer aided management of electronic mail or groupware; Time management, e.g. calendars, reminders, meetings or time accounting
      - G06Q30/00—Commerce, e.g. shopping or e-commerce
        - G06Q30/02—Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
          - G06Q30/0202—Market predictions or demand forecasting
          - G06Q30/0203—Market surveys or market polls
          - G06Q30/0241—Advertisement
            - G06Q30/0251—Targeted advertisement
      - G06Q50/00—Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism
        - G06Q50/01—Social networking
  - G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    - G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
      - G09B5/00—Electrically-operated educational appliances
        - G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
          - G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
      - G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
        - G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L13/00—Speech synthesis; Text to speech systems
      - G10L17/00—Speaker identification or verification
        - G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
      - G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
        - G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
          - G10L21/10—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids transforming into visible information
      - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
        - G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
          - G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
            - G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
Similar Documents
Publication | Title
---|---
Fan et al. | Automatic soundscape affect recognition using a dimensional approach
Klett et al. | The meaning of indeterminacy: Noise music as performance
Thorogood et al. | Soundscape audio signal classification and segmentation using listeners perception of background and foreground sound
Seetharaman et al. | Audealize: Crowdsourced audio production tools
Hoque et al. | Accessible data representation with natural sound
Francombe et al. | A model of distraction in an audio-on-audio interference situation with music program material
Lin et al. | Research on sound imagery of electric shavers based on kansei engineering and multiple artificial neural networks
Liu | An automatic classification method for multiple music genres by integrating emotions and intelligent algorithms
Dobrowohl et al. | Timbre preferences in the context of mixing music
Liew et al. | A sonification of cross-cultural differences in happiness-related tweets
Cunha Sergio et al. | Emotional Video to Audio Transformation Using Deep Recurrent Neural Networks and a Neuro-Fuzzy System
Thorogood et al. | A framework for computer-assisted sound design systems supported by modelling affective and perceptual properties of soundscape
Kaarivuo et al. | Exploring emergent soundscape profiles from crowdsourced audio data
Pardo et al. | Learning to build natural audio production interfaces
Fong et al. | A theory-based explainable deep learning architecture for music emotion
Ly et al. | Generating artificial reverberation via genetic algorithms for real-time applications
Martens et al. | Relating multilingual semantic scales to a common timbre space
Lenzi et al. | Towards a definition of Autographic Sonifications: Listening as an act of knowledge
Fan | Advances in soundscape and music emotion recognition
Brambilla et al. | Measurements and Techniques in Soundscape Research
McGregor et al. | Comparative evaluation of radio and audio logo sound designs
Molina | Adaptive music: Automated music composition and distribution
Wang et al. | Personalized recommendation based on improved speech recognition algorithm in music e-learning course simulation
Barker et al. | Listening tests for sound design: faster optimization through lower-dimensional parameter spaces
Pereira et al. | Using neural networks to compute time offsets from musical instruments