14th Audio Mostly Conference 2019: Nottingham, UK
- Proceedings of the 14th International Audio Mostly Conference: A Journey in Sound, Nottingham, UK, September 18-20, 2019. ACM 2019, ISBN 978-1-4503-7297-8
Full Papers
- Marian Weger, Robert Höldrich:
A hear-through system for plausible auditory contrast enhancement. 1-8
- Nathan Renney, Benedict R. Gaster:
Digital Expression and Representation of Rhythm. 9-16
- Alejandro Delgado, SKoT McDonald, Ning Xu, Mark B. Sandler:
A New Dataset for Amateur Vocal Percussion Analysis. 17-23
- Filippo Carnovalini, Antonio Rodà:
A Real-Time Tempo and Meter Tracking System for Rhythmic Improvisation. 24-31
- Adrian Hazzard, Chris Greenhalgh:
Adaptive Musical Soundtracks: from in-game to on the street. 31-38
- Keisuke Shiro, Ryotaro Miura, Changyo Han, Jun Rekimoto:
An Intuitive Interface for Digital Synthesizer by Pseudo-intention Learning. 39-44
- Katja Rogers, Michael Weber:
Audio Habits and Motivations in Video Game Players. 45-52
- Michael Iber, Patrik Lechner, Christian Jandl, Manuel Mader, Michael Reichmann:
Auditory Augmented Reality for Cyber Physical Production Systems. 53-60
- Ronan O'Dea, Rokaia Jedir, Flaithrí Neff:
Auditory Distraction in HCI: Towards a Framework for the Design of Hierarchically-Graded Auditory Notifications. 61-66
- Juan Pablo Martinez-Avila, Adrian Hazzard, Chris Greenhalgh, Steve Benford:
Augmenting Guitars for Performance Preparation. 69-75
- Dirk Vander Wilt, Morwaread Mary Farbood:
Automating Audio Description for Live Theater: Using Reference Recordings to Trigger Descriptive Tracks in Real Time. 75-81
- Jack Armitage, Andrew P. McPherson:
Bricolage in a hybrid digital lutherie context: a workshop study. 82-89
- Michael Urbanek, Florian Güldenpfennig:
Celebrating 20 Years of Computer-based Audio Gaming. 90-97
- Lars Engeln, Rainer Groh:
CoHEARence: a qualitive User-(Pre-)Test on Resynthesized Shapes for coherent visual Sound Design. 98-102
- Thomas J. Graham, Thor Magnusson, Chinmay Rajguru, Arash Pour Yazdan, Alex Jacobs, Gianluca Memoli:
Composing spatial soundscapes using acoustic metasurfaces. 103-110
- Annaliese Micallef Grimaud, Tuomas Eerola, Nick Collins:
EmoteControl: A System for Live-Manipulation of Emotional Cues in Music. 111-115
- Stuart Cunningham, Harrison Ridley, Jonathan Weinel, Richard Picking:
Audio Emotion Recognition using Machine Learning to support Sound Design. 116-123
- Emma Young, Alan Marsden, Paul Coulton:
Making the Invisible Audible: Sonifying Qualitative Data. 124-130
- Fred Bruford, Mathieu Barthet, SKoT McDonald, Mark B. Sandler:
Modelling Musical Similarity for Drum Patterns: A Perceptual Evaluation. 131-138
- Elio Toppano, Sveva Toppano, Alessandro Basiaco:
Moving across Sonic Atmospheres. 139-146
- David L. Page:
Music & Sound-tracks of our everyday lives: Music & Sound-making, Meaning-making, Self-making. 147-153
- Iain Emsley, David De Roure, Pip Willcox, Alan Chamberlain:
Performing Shakespeare: From Symbolic Notation to Sonification. 154-159
- Michael Krzyzaniak, David M. Frohlich, Philip J. B. Jackson:
Six types of audio that DEFY reality!: A taxonomy of audio augmented reality with examples. 160-167
- Luca Turchet, Travis J. West, Marcelo M. Wanderley:
Smart Mandolin and Musical Haptic Gilet: effects of vibro-tactile stimuli during live music performance. 168-175
- Laurence Cliffe, James Mansell, Joanne Cormac, Chris Greenhalgh, Adrian Hazzard:
The Audible Artefact: Promoting Cultural Exploration and Engagement with Audio Augmented Reality. 176-182
- Gary Bromham, David Moffat, Mathieu Barthet, Anne Danielsen, György Fazekas:
The Impact of Audio Effects Processing on the Perception of Brightness and Warmth. 183-190
- Inês Salselas, Rui Penha:
The role of sound in inducing storytelling in immersive environments. 191-198
- Feng Su, Chris Joslin:
Toward Generating Realistic Sounds for Soft Bodies: A Review. 199-206
- Neil McGuiness, Chris Nash:
The Pulse: Embedded Beat Sensing Using Physical Data. 207-214
Short Papers
- Darrell Gibson, Richard Polfreman:
A Journey in (Interpolated) Sound: Impact of Different Visualizations in Graphical Interpolators. 215-218
- Andrew Thompson, György Fazekas:
A Model-View-Update Framework for Interactive Web Audio Applications. 219-222
- Etienne Richan, Jean Rouat:
A study comparing shape, colour and texture as visual labels in audio sample browsers. 223-226
- Adriano Baratè, Luca A. Ludovico, Davide A. Mauro:
A Web Prototype to Teach Music and Computational Thinking Through Building Blocks. 227-230
- Yesid Ospitia Medina, José Ramón Beltrán, Cecilia Veronica Sanz, Sandra Baldassarri:
Dimensional Emotion Prediction through Low-Level Musical Features. 231-234
- Jeevan Singh Nayal, Abhishek Joshi, Bijendra Kumar:
Emotion Recognition in Songs via Bayesian Deep Learning. 235-238
- Dalia Senvaityte, Johan Pauwels, Mark B. Sandler:
Guitar String Separation Using Non-Negative Matrix Factorization and Factor Deconvolution. 239-243
- Luca Turchet, Mathieu Barthet:
Haptification of performer's control gestures in live electronic music performance. 244-247
- Stine S. Lundgaard, Peter Axel Nielsen, Jesper Kjeldskov:
Interaction Design for Domestic Sound Zones. 248-251
- Luca Turchet:
Interactive sonification and the IoT: the case of smart sonic shoes for clinical applications. 252-255
- David Alexander, Jack Armitage:
LiveCore: Increasing Liveness in a Low-Level Dataflow Programming Environment. 256-259
- Sara Nielsen, Lars Bo Larsen, Kashmiri Stec, Adèle Simon:
Mental Models of Loudspeaker Directivity. 260-263
- Signe Lund Mathiesen, Derek Victor Byrne, Qian Janice Wang:
Sonic Mug: A Sonic Seasoning System. 264-267
- Trevor Hunter, Peter Worthy, Ben Matthews, Stephen Viller:
Using Participatory Design in the Development of a New Musical Interface: Understanding Musician's Needs beyond Usability. 268-271
Extended Abstracts
- Michael Urbanek, Florian Güldenpfennig, Michael Habiger:
Creating Audio Games Online with a Browser-Based Editor. 272-276
- Jonathan Weinel:
Cyberdream VR: Visualizing Rave Music and Vaporwave in Virtual Reality. 277-281
- Luke Skarth-Hayley, Julie Greensmith:
Demonstrating Customisation Markup in the Siren Songs Sonification System. 282-286
- Daniel Mayer:
PbindFx: an interface for sequencing effect graphs in the SuperCollider audio programming language. 287-291
- Trevor Hunter, Peter Worthy, Ben Matthews, Stephen Viller:
Soundscape: Participatory Design of an Interface for Musical Expression. 292-296
- Laura Boffi:
The First Experience Prototype of The Storytellers Project. 297-301