The official repository for the paper "Wave-U-Mamba: An End-To-End Framework For High-Quality And Efficient Speech Super Resolution".
Abstract
Speech Super-Resolution (SSR) is the task of enhancing low-resolution speech signals by restoring missing high-frequency components. Conventional approaches typically reconstruct log-mel features, followed by a vocoder that generates high-resolution speech in the waveform domain. However, as log-mel features lack phase information, this can result in performance degradation during the reconstruction phase. Motivated by recent advances in Selective State Space Models (SSMs), we propose a method, referred to as Wave-U-Mamba, that directly performs SSR in the time domain. In our comparative study, including models such as WSRGlow, NU-Wave 2, and AudioSR, Wave-U-Mamba demonstrates superior performance, achieving the lowest Log-Spectral Distance (LSD) across various low-resolution sampling rates, ranging from 8 kHz to 24 kHz. Additionally, subjective human evaluations, scored using Mean Opinion Score (MOS), reveal that our method produces super-resolved speech with natural and human-like quality. Furthermore, Wave-U-Mamba achieves these results while generating high-resolution speech over nine times faster than baseline models on a single A100 GPU, with fewer than 2% of the parameters of the baseline models.
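For reference, Log-Spectral Distance (LSD) measures the root-mean-square difference between the log power spectra of the generated and ground-truth signals. Below is a minimal PyTorch sketch of the metric; the STFT configuration (2048-point FFT, 512-sample hop, Hann window) is an illustrative assumption, not necessarily the paper's exact evaluation setup.
import torch

def log_spectral_distance(ref, est, n_fft=2048, hop=512, eps=1e-10):
    """ref, est: 1-D waveforms of equal length and sampling rate."""
    window = torch.hann_window(n_fft)
    # Power spectrogram, shape (n_fft // 2 + 1, frames).
    spec = lambda x: torch.stft(x, n_fft, hop, window=window, return_complex=True).abs() ** 2
    log_ref, log_est = torch.log10(spec(ref) + eps), torch.log10(spec(est) + eps)
    # RMS over frequency bins, then averaged over frames; lower is better.
    return torch.sqrt(((log_ref - log_est) ** 2).mean(dim=0)).mean()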
- Clone this repository and change the directory.
git clone https://github.com/infected4098/Wave-U-Mamba.git
cd Wave-U-Mamba
- Install Python requirements. Please check requirements.txt.
pip install -r requirements.txt
- Download the config file. Please check cfgs.
- Download the pretrained model.
TBD
- Please resample the Low-Resolution audio to make sure your Low-Resolution .wav file has a sampling rate of 48 kHz (a minimal resampling sketch is provided after the inference command below).
- Please run the following command.
python inference_wav.py --wav_path [Low-Resolution wav path] \
--output_dir [Folder to save the HR audio wav files] \
--checkpoint_file [Downloaded pretrained model file path] \
--cfgs_path [cfgs file path]
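As noted above, the input file must be at 48 kHz before inference. A minimal resampling sketch using torchaudio could look like the following; the helper function and file names are hypothetical placeholders, not part of this repository.
import torchaudio

def resample_to_48k(in_path, out_path, target_sr=48000):
    wav, sr = torchaudio.load(in_path)  # wav: (channels, samples)
    if sr != target_sr:
        wav = torchaudio.functional.resample(wav, sr, target_sr)
    torchaudio.save(out_path, wav, target_sr)

resample_to_48k("input_8khz.wav", "input_48khz.wav")  # placeholder file names
The resulting 48 kHz file can then be passed to inference_wav.py via --wav_path.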
We provide a Colab demo to show how the inference process works in a nutshell. If you are not using CUDA or any device compatible with the official implementation of Mamba, you can use an alternative implementation; a minimal sketch follows.
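One such alternative, offered purely as an illustration (the function name, signature, and tensor layout below are our own assumptions, not this repository's API), is a naive sequential selective scan in plain PyTorch that mirrors the reference recurrence from the Mamba paper:
import torch

def selective_scan_ref(x, delta, A, B, C, D):
    """Naive per-step SSM recurrence, usable on CPU in place of the fused CUDA kernel.
    x, delta: (batch, length, d_inner); A: (d_inner, d_state);
    B, C: (batch, length, d_state); D: (d_inner,)."""
    b, l, d = x.shape
    h = torch.zeros(b, d, A.shape[1], dtype=x.dtype, device=x.device)
    ys = []
    for t in range(l):
        dt = delta[:, t].unsqueeze(-1)                 # (b, d, 1)
        a_bar = torch.exp(dt * A)                      # discretized state matrix, (b, d, n)
        b_bar = dt * B[:, t].unsqueeze(1)              # discretized input matrix, (b, d, n)
        h = a_bar * h + b_bar * x[:, t].unsqueeze(-1)  # state update
        ys.append((h * C[:, t].unsqueeze(1)).sum(-1))  # readout, (b, d)
    return torch.stack(ys, dim=1) + x * D              # stacked outputs plus skip path
This trades the fused kernel's speed for portability: it is much slower in practice but computes the same recurrence.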
You can listen to some of the generated samples here.
We especially thank Sungbin Lim for sharing valuable insights and ideas on the draft. We referred to HiFi-GAN, Mamba, and many other resources to implement this.
To-Do
- Provide pipelines for training.
- Build an API for interactive inference and utilization.
If you find this code useful in your research, please consider citing our paper:
@inproceedings{lee2024waveumambaendtoendframeworkhighquality,
  author    = {Yongjoon Lee and Chanwoo Kim},
  title     = {Wave-U-Mamba: An End-To-End Framework For High-Quality And Efficient Speech Super Resolution},
  booktitle = {Proceedings of the 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
  year      = {2025},
  address   = {Hyderabad, India},
  month     = {April},
  publisher = {IEEE}
}