%0 Conference Proceedings
%T fairseq S²: A Scalable and Integrable Speech Synthesis Toolkit
%A Wang, Changhan
%A Hsu, Wei-Ning
%A Adi, Yossi
%A Polyak, Adam
%A Lee, Ann
%A Chen, Peng-Jen
%A Gu, Jiatao
%A Pino, Juan
%Y Adel, Heike
%Y Shi, Shuming
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F wang-etal-2021-fairseq
%X This paper presents fairseq S², a fairseq extension for speech synthesis. We implement a number of autoregressive (AR) and non-AR text-to-speech models, and their multi-speaker variants. To enable training speech synthesis models with less curated data, a number of preprocessing tools are built and their importance is shown empirically. To facilitate faster iteration of development and analysis, a suite of automatic metrics is included. Apart from the features added specifically for this extension, fairseq S² also benefits from the scalability offered by fairseq and can be easily integrated with other state-of-the-art systems provided in this framework. The code, documentation, and pre-trained models will be made available at https://github.com/pytorch/fairseq/tree/master/examples/speech_synthesis.
%R 10.18653/v1/2021.emnlp-demo.17
%U https://aclanthology.org/2021.emnlp-demo.17
%U https://doi.org/10.18653/v1/2021.emnlp-demo.17
%P 143-152