
Wave-U-Mamba (ICASSP 2025)

Yongjoon Lee, Chanwoo Kim

The official repository for the paper "Wave-U-Mamba: An End-To-End Framework For High-Quality And Efficient Speech Super Resolution".

Model Architecture
Figure: the architecture of the DownsampleBlock (left), the Wave-U-Mamba Generator (middle), and the UpsampleBlock (right).

Abstract
Speech Super-Resolution (SSR) is the task of enhancing low-resolution speech signals by restoring missing high-frequency components. Conventional approaches typically reconstruct log-mel features, followed by a vocoder that generates high-resolution speech in the waveform domain. However, because log-mel features lack phase information, this can result in performance degradation during the reconstruction phase. Motivated by recent advances in Selective State Space Models (SSMs), we propose a method, referred to as Wave-U-Mamba, that directly performs SSR in the time domain. In our comparative study, including models such as WSRGlow, NU-Wave 2, and AudioSR, Wave-U-Mamba demonstrates superior performance, achieving the lowest Log-Spectral Distance (LSD) across various low-resolution sampling rates ranging from 8 kHz to 24 kHz. Additionally, subjective human evaluations, scored using Mean Opinion Score (MOS), reveal that our method produces SSR with natural and human-like quality. Furthermore, Wave-U-Mamba achieves these results while generating high-resolution speech over nine times faster than baseline models on a single A100 GPU, with parameter sizes less than 2% of those of the baseline models.
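
For reference, the following is a minimal sketch of the Log-Spectral Distance (LSD) metric reported above, following its common definition in the SSR literature; the STFT settings (n_fft, hop) are illustrative and may differ from those used in the paper.

import numpy as np
import librosa

def log_spectral_distance(ref, est, n_fft=2048, hop=512):
    # Power spectrograms of the reference (high-resolution) and estimated audio.
    S_ref = np.abs(librosa.stft(ref, n_fft=n_fft, hop_length=hop)) ** 2
    S_est = np.abs(librosa.stft(est, n_fft=n_fft, hop_length=hop)) ** 2
    eps = 1e-10  # guard against log of zero
    log_diff = np.log10(S_ref + eps) - np.log10(S_est + eps)
    # RMS over frequency bins, then mean over time frames.
    return float(np.mean(np.sqrt(np.mean(log_diff ** 2, axis=0))))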

Prerequisites

  1. Clone this repository and change into its directory.
git clone https://github.com/infected4098/Wave-U-Mamba.git
cd Wave-U-Mamba
  2. Install the Python requirements. Please check requirements.txt.
pip install -r requirements.txt
  3. Download the config file. Please check cfgs.
  4. Download the pretrained model.

Pretrained Model

TBD

Inference from wav file

  1. Resample the low-resolution audio so that your low-resolution .wav file has a sampling rate of 48 kHz (a sketch of one way to do this follows the command below).
  2. Run the following command.
python inference_wav.py --wav_path [Low-Resolution wav path] \
--output_dir [Folder to save the HR audio wav files] \
--checkpoint_file [Downloaded pretrained model file path] \
--cfgs_path [cfgs file path]
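
As mentioned in step 1, below is a minimal sketch of resampling an input file to 48 kHz, assuming librosa and soundfile are installed; the function name and file paths are illustrative and not part of this repository.

import librosa
import soundfile as sf

def resample_to_48k(in_path, out_path):
    # Load the low-resolution audio at its native sampling rate.
    audio, sr = librosa.load(in_path, sr=None, mono=True)
    # Upsample the waveform to the 48 kHz rate expected by inference_wav.py.
    audio_48k = librosa.resample(audio, orig_sr=sr, target_sr=48000)
    sf.write(out_path, audio_48k, 48000)

resample_to_48k("speech_8khz.wav", "speech_48khz_input.wav")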

Inference Pipelines (TBD)

We provide a Colab demo that shows, in a nutshell, how the inference process works. If you are not using CUDA or another device compatible with the official Mamba implementation, you can use an alternative implementation instead.
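
As a rough illustration of that fallback, the sketch below checks for CUDA before choosing a Mamba implementation; the fallback branch is a placeholder, since this repository does not ship an alternative implementation.

import torch

if torch.cuda.is_available():
    # The official CUDA-accelerated Mamba kernels can be used directly.
    device = torch.device("cuda")
else:
    # Swap in a CUDA-free alternative (e.g., a pure-PyTorch Mamba port) here.
    device = torch.device("cpu")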

Demos

You can listen to some of the generated samples here.

Acknowledgements

We especially thank Sungbin Lim for sharing valuable insights and ideas on the draft. We referred to HiFi-GAN, Mamba and many other resources to implement this.

To-do

  1. Provide pipelines for training.
  2. Build an API for interactive inference and utilization.

Citation

If you find this code useful in your research, please consider citing our paper:

@inproceedings{lee2024waveumambaendtoendframeworkhighquality,
  author    = {Yongjoon Lee and Chanwoo Kim},
  title     = {Wave-U-Mamba: An End-To-End Framework For High-Quality And Efficient Speech Super Resolution},
  booktitle = {Proceedings of the 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2025},
  address   = {Hyderabad, India},
  month     = {April},
  publisher = {IEEE}
}
