
Published June 1, 2017 | Version v1
Conference paper | Open

A Knowledge-based, Data-driven Method for Action-sound Mapping

Description

This paper presents a knowledge-based, data-driven method for generating multiple complex mappings between a musician's performance movements and sound synthesis, using data describing action-sound couplings collected from a group of people. The method relies on a database of multimodal motion data collected from multiple subjects and paired with sound synthesis parameters. A series of sound stimuli is synthesised using the sound engine that will be used in performance. Multimodal motion data are collected by asking each participant to listen to each sound stimulus and move as if they were producing the sound on a musical instrument they are given. The multimodal data recorded during each performance are paired with the synthesis parameters used to generate the stimulus. The resulting dataset is used to build a topological representation of the subjects' performance movements. This representation is then used to interactively generate training data for machine learning algorithms and to define mappings for real-time performance. To illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.
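The core idea of pairing recorded motion features with the synthesis parameters that produced each stimulus, then querying that database at performance time, can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the feature dimensions, parameter counts, and the simple weighted nearest-neighbour lookup (in place of the paper's topological representation and trained models) are all assumptions made for the example.

```python
import numpy as np

# Hypothetical paired dataset, as described in the abstract: each row couples
# a motion-feature vector (recorded while a participant moved to a stimulus)
# with the synthesis parameters that generated that stimulus.
rng = np.random.default_rng(0)
n_stimuli, n_motion_dims, n_synth_params = 20, 6, 3  # assumed sizes

synth_params = rng.uniform(0.0, 1.0, (n_stimuli, n_synth_params))
motion_features = rng.normal(0.0, 1.0, (n_stimuli, n_motion_dims))

def map_motion_to_sound(query, motion_db, param_db, k=3):
    """Toy mapping: distance-weighted average of the synthesis parameters
    belonging to the k motion examples nearest the incoming frame."""
    dists = np.linalg.norm(motion_db - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)
    return (param_db[nearest] * weights[:, None]).sum(axis=0) / weights.sum()

# Simulated live motion frame: a noisy version of a recorded example.
live_frame = motion_features[0] + 0.05 * rng.normal(size=n_motion_dims)
params = map_motion_to_sound(live_frame, motion_features, synth_params)
print(params)  # one synthesis-parameter vector per incoming motion frame
```

In a real system the query vector would arrive from motion capture or sensor armbands at each frame, and the returned parameter vector would drive the same sound engine used to synthesise the stimuli.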

Files

nime2017_paper0043.pdf (1.1 MB)
md5:46de493c8287d447e63877b7516f2f76