

Real-Time Automotive Engine Sound Simulation with Deep Neural Network

Li et al., 2023

Document ID: 2490927911887855577
Author: Li H, Wang W, Li M
Publication year: 2023
Publication venue: National Conference on Man-Machine Speech Communication

Snippet

This paper introduces a real-time technique for simulating automotive engine sounds based on revolutions per minute (RPM) and pedal pressure data. We present a hybrid approach combining both sample-based and procedural methods. In the sample-based technique, the …
Continue reading at sites.duke.edu (PDF)
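The snippet describes a hybrid of sample-based and procedural synthesis driven by RPM and pedal-pressure input. As a rough illustration of the sample-based idea only (a minimal sketch under assumed parameters, not the authors' implementation), the Python example below resamples an engine loop so that its pitch tracks a target RPM; the loop itself is synthesized here as a stand-in for a real recording.

```python
import numpy as np

SR = 44_100  # audio sample rate in Hz (assumed)

def make_engine_loop(base_rpm: float = 1000.0, seconds: float = 1.0) -> np.ndarray:
    """Stand-in for a recorded engine loop: a few harmonics of the firing frequency."""
    # For a 4-stroke, 4-cylinder engine, the dominant firing component is the 2nd engine order.
    f0 = base_rpm / 60.0 * 2.0
    t = np.arange(int(SR * seconds)) / SR
    loop = sum((0.6 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))
    return np.asarray(loop)

def sample_based_engine(loop: np.ndarray, base_rpm: float, target_rpm: float,
                        seconds: float) -> np.ndarray:
    """Pitch-shift the loop by resampling so its pitch follows target_rpm."""
    ratio = target_rpm / base_rpm               # playback-speed ratio
    n_out = int(SR * seconds)
    # Read positions advance `ratio` input samples per output sample, wrapping around the loop.
    pos = (np.arange(n_out) * ratio) % (len(loop) - 1)
    idx = pos.astype(int)
    frac = pos - idx
    # Linear interpolation between neighbouring loop samples.
    return (1.0 - frac) * loop[idx] + frac * loop[idx + 1]

if __name__ == "__main__":
    loop = make_engine_loop(base_rpm=1000.0)
    out = sample_based_engine(loop, base_rpm=1000.0, target_rpm=3000.0, seconds=2.0)
    print(out.shape)  # (88200,) samples, ready to be written or streamed to an audio device
```

A real-time renderer would do this per audio block, crossfading between loops recorded at different RPMs and layering a procedural component (e.g., filtered noise) on top; those details, and the function and parameter names above, are assumptions for illustration, not taken from the paper.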

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003 Changing voice quality, e.g. pitch or formants
    • G10L21/007 Changing voice quality, e.g. pitch or formants characterised by the process used
    • G10L21/013 Adapting to target pitch
    • G10L2021/0135 Voice conversion or morphing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H2250/00 Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/131 Mathematical functions for musical analysis, processing, synthesis or composition
    • G10H2250/215 Transforms, i.e. mathematical transforms into domains appropriate for musical signal processing, coding or compression
    • G10H2250/235 Fourier transform; Discrete Fourier Transform [DFT]; Fast Fourier Transform [FFT]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signal, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signal, using source filter models or psychoacoustic analysis using predictive techniques
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids transforming into visible information
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 characterised by the type of extracted parameters
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • G10K15/02 Synthesis of acoustic waves
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H2250/00 Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/471 General musical sound synthesis principles, i.e. sound category-independent synthesis methods
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal

Similar Documents

Jagla et al. Sample-based engine noise synthesis using an enhanced pitch-synchronous overlap-and-add method
Schwarz State of the art in sound texture synthesis
EP4099316B1 (en) Speech synthesis method and system
JPH0863197A (en) Method of decoding voice signal
CN112298031B (en) Active sounding method and system for electric automobile based on shift strategy migration
Abeysinghe et al. Data augmentation on convolutional neural networks to classify mechanical noise
CN116110423A (en) Multi-mode audio-visual separation method and system integrating double-channel attention mechanism
Gontier et al. Privacy aware acoustic scene synthesis using deep spectral feature inversion
Li et al. Real-Time Automotive Engine Sound Simulation with Deep Neural Network
Qian et al. Stripe-Transformer: deep stripe feature learning for music source separation
CN112466274B (en) In-vehicle active sounding method and system of electric vehicle
Vinitha George et al. A novel U-Net with dense block for drum signal separation from polyphonic music signal mixture
Natsiou et al. An exploration of the latent space of a convolutional variational autoencoder for the generation of musical instrument tones
CN117351949A (en) Environmental sound identification method based on second-order cyclic neural network
CN114446316B (en) Audio separation method, training method, device and equipment of audio separation model
CN112652315B (en) Automobile engine sound real-time synthesis system and method based on deep learning
Chang et al. Personalized EV Driving Sound Design Based on the Driver's Total Emotion Recognition
Shao et al. Deep semantic learning for acoustic scene classification
Dupré et al. Analysis by synthesis of engine sounds for the design of dynamic auditory feedback of electric vehicles
Pan et al. PVGAN: a pathological voice generation model incorporating a progressive nesting strategy
Kronland-Martinet et al. High-level control of sound synthesis for sonification processes
Reghunath et al. Predominant audio source separation in polyphonic music
Roddy et al. The design of a smart city sonification system using a conceptual blending and musical framework, web audio and deep learning techniques
Gully et al. Articulatory text-to-speech synthesis using the digital waveguide mesh driven by a deep neural network
Akesbi Audio denoising for robust audio fingerprinting