9. CMMR 2012: London, UK
- Mitsuko Aramaki, Mathieu Barthet, Richard Kronland-Martinet, Sølvi Ystad:
From Sounds to Music and Emotions - 9th International Symposium, CMMR 2012, London, UK, June 19-22, 2012, Revised Selected Papers. Lecture Notes in Computer Science 7900, Springer 2013, ISBN 978-3-642-41247-9
I - Music Emotion Analysis
- Emery Schubert, Sam Ferguson, Natasha Farrar, David Taylor, Gary E. McPherson:
The Six Emotion-Face Clock as a Tool for Continuously Rating Discrete Emotional Responses to Music. 1-18
- Javier Jaimovich, Niall Coghlan, R. Benjamin Knapp:
Emotion in Motion: A Study of Music and Affective Response. 19-43
- Konstantinos Trochidis, David Sears, Diêu-Ly Trân, Stephen McAdams:
Psychophysiological Measures of Emotional Response to Romantic Orchestral Music and Their Musical and Acoustic Correlates. 44-57
II - 3D Audio and Sound Synthesis
- Martin J. Morrell, Joshua D. Reiss:
Two-Dimensional Hybrid Spatial Audio Systems with User Variable Controls of Sound Source Attributes. 58-81
- Ruimin Hu, Shi Dong, Heng Wang, Maosheng Zhang, Song Wang, Dengshi Li:
Perceptual Characteristic and Compression Research in 3D Audio Technology. 82-98
- Simon Conan, Mitsuko Aramaki, Richard Kronland-Martinet, Sølvi Ystad:
Intuitive Control of Rolling Sound Synthesis. 99-109
- Gilberto Bernardes, Carlos Guedes, Bruce W. Pennycook:
EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data. 110-129
- Etienne Thoret, Mitsuko Aramaki, Richard Kronland-Martinet, Jean-Luc Velay, Sølvi Ystad:
Reenacting Sensorimotor Features of Drawing Movements from Friction Sounds. 130-153
III - Computer Models of Music Perception and Cognition
- Clara Suied, Angélique Dremeau, Daniel Pressnitzer, Laurent Daudet:
Auditory Sketches: Sparse Representations of Sounds Based on Perceptual Models. 154-170
- Marcelo F. Caetano, Athanasios Mouchtaris, Frans Wiering:
The Role of Time in Music Emotion Recognition: Modeling Musical Emotions from Time-Varying Music Features. 171-196
- Thomas C. Walters, David A. Ross, Richard F. Lyon:
The Intervalgram: An Audio Feature for Large-Scale Cover-Song Recognition. 197-213
- Jason Jiri Musil, Budr Elnusairi, Daniel Müllensiefen:
Perceptual Dimensions of Short Audio Clips and Corresponding Timbre Features. 214-227
IV - Music Emotion Recognition
- Mathieu Barthet, György Fazekas, Mark B. Sandler:
Music Emotion Recognition: From Content- to Context-Based Models. 228-252
- Jens Madsen, Bjørn Sand Jensen, Jan Larsen:
Predictive Modeling of Expressed Emotions in Music Using Pairwise Comparisons. 253-277
- Erik M. Schmidt, Matthew Prockup, Jeffrey J. Scott, Brian Dolhansky, Brandon G. Morton, Youngmoo E. Kim:
Analyzing the Perceptual Salience of Audio Features for Musical Emotion Recognition. 278-300
V - Music Information Retrieval
- Jan Van Balen, Joan Serrà, Martín Haro:
Sample Identification in Hip Hop Music. 301-312
- Lorenzo J. Tardón, Isabel Barbancho:
Music Similarity Evaluation Using the Variogram for MFCC Modelling. 313-332
- Jakob Abeßer:
Automatic String Detection for Bass Guitar and Electric Guitar. 333-352
- Ken O'Hanlon, Hidehisa Nagano, Mark D. Plumbley:
Using Oracle Analysis for Decomposition-Based Automatic Music Transcription. 353-365
VI - Film Soundtrack and Music Recommendation
- Fernando Bravo:
The Influence of Music on the Emotional Interpretation of Visual Contexts - Designing Interactive Multimedia Tools for Psychological Research. 366-377
- Sonia Wilkie, Tony Stockman:
The Perception of Auditory-Visual Looming in Film. 378-386
VII - Computational Musicology and Music Education
- Dan Stowell, Elaine Chew:
Maximum a Posteriori Estimation of Piecewise Arcs in Tempo Time-Series. 387-399
- Satoshi Tojo, Keiji Hirata:
Structural Similarity Based on Time-Span Tree. 400-421
- Mathieu Giraud, Richard Groult, Florence Levé:
Subject and Counter-Subject Detection for Analysis of the Well-Tempered Clavier Fugues. 422-438
- Arjun Chandra, Kristian Nymoen, Arve Voldsund, Alexander Refsum Jensenius, Kyrre Glette, Jim Tørresen:
Market-Based Control in Interactive Music Environments. 439-458
VIII - Cross-Disciplinary Perspectives on Expressive Performance Workshop
- Regiane Yamaguchi, Fernando Gualda:
(Re)Shaping Musical Gesture: Modelling Voice Balance and Overall Dynamics Contour. 459-468
- Kristoffer Jensen, Søren R. Frimodt-Møller:
Multimodal Analysis of Piano Performances Portraying Different Emotions. 469-479
- John Paul Ito:
Focal Impulses and Expressive Performance. 480-489
- Alexis Kirke, Eduardo Reck Miranda, Slawomir J. Nasuto:
Learning to Make Feelings: Expressive Performance as a Part of a Machine Learning Tool for Sound-Based Emotion Control. 490-499