2025.05.07
🌟 We are proud to launch VITA-Audio, an end-to-end large speech model with fast audio-text token generation.
- Low Latency. VITA-Audio is the first end-to-end speech model capable of generating audio during the initial forward pass. Using a set of 32 prefill tokens, VITA-Audio cuts the time to generate the first audio token chunk from 236 ms to just 53 ms.
- Fast Inference. VITA-Audio achieves an inference speedup of 3-5x at the 7B parameter scale.
- Open Source. VITA-Audio is trained on open-source data only, consisting of 200k hours of publicly available audio.
- Strong Performance. VITA-Audio achieves competitive results on ASR, TTS, and SQA benchmarks among cutting-edge models under 7B parameters.
Model inference speed under different inference modes.
打南边来了个哑巴，腰里别了个喇叭；打北边来了个喇嘛，手里提了个獭犸。
提着獭犸的喇嘛要拿獭犸换别着喇叭的哑巴的喇叭；别着喇叭的哑巴不愿拿喇叭换提着獭犸的喇嘛的獭犸。
不知是别着喇叭的哑巴打了提着獭犸的喇嘛一喇叭，还是提着獭犸的喇嘛打了别着喇叭的哑巴一獭犸。
喇嘛回家炖獭犸；哑巴嘀嘀哒哒吹喇叭。
(A tongue twister: From the south came a mute with a trumpet at his waist; from the north came a lama carrying an otter. The lama with the otter wanted to trade it for the mute's trumpet; the mute with the trumpet refused to trade it for the lama's otter. No one knows whether the mute hit the lama with his trumpet, or the lama hit the mute with his otter. The lama went home to stew his otter; the mute tooted away on his trumpet.)
audio_1.mov
To be or not to be--to live intensely and richly, or merely to exist, that depends on ourselves. Let us widen and intensify our relations.
While we live, let us live!
audio_2.mov
Your hair is already so thin; stop worrying about it and go to bed early, for your hair's sake. Good night!
audio_3.mov
两个黄鹂鸣翠柳，一行白鹭上青天。
窗含西岭千秋雪，门泊东吴万里船。
(From Du Fu's "Quatrain": Two golden orioles sing amid the green willows; a line of white egrets climbs into the blue sky. My window frames the western ridge's age-old snow; at my gate moors a boat from faraway eastern Wu.)
audio_4.mov
- Release training code and inference code.
- Release checkpoints.
- Release VITA-Audio-Plus.
- Release the cleaned open-source data JSON and audio.
| Model | LLM Size | Huggingface Weights |
|---|---|---|
| VITA-Audio-Boost | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Boost |
| VITA-Audio-Balance | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Balance |
| VITA-Audio-Plus-Vanilla | 7B | https://huggingface.co/VITA-MLLM/VITA-Audio-Plus-Vanilla |
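A minimal loading sketch for these checkpoints, assuming they work with `transformers` via remote modeling code (not verified here; the inference script further down is the supported entry point):

```python
# Hypothetical loading sketch; assumes the Hugging Face checkpoints load
# through transformers with custom modeling code enabled.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "VITA-MLLM/VITA-Audio-Boost"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, trust_remote_code=True, device_map="auto"  # device_map requires accelerate
)
```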
- Comparison of Spoken Question Answering.
- Comparison of Text to Speech.
- Comparison of Automatic Speech Recognition.
- Effectiveness of Inference Acceleration.
```sh
# Pull the prebuilt container image
docker pull shenyunhang/pytorch:24.11-py3_2024-1224

# Clone the repository and install dependencies
git clone https://github.com/VITA-MLLM/VITA-Audio.git
cd VITA-Audio
git submodule update --init --recursive
pip install -r requirements_ds_gpu.txt
pip install -e .
```
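Before moving on, a quick sanity check that the GPU stack is visible from Python (a minimal sketch, assuming PyTorch is provided by the Docker image or the requirements above):

```python
# Verify that PyTorch sees a CUDA device before launching training.
import torch

print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```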
- Download the LLM from https://huggingface.co/Qwen/Qwen2.5-7B-Instruct and put it into `../models/Qwen/Qwen2.5-7B-Instruct/`.
- Download the audio encoder from https://huggingface.co/THUDM/glm-4-voice-tokenizer and put it into `../models/THUDM/glm-4-voice-tokenizer`.
- Download the audio decoder from https://huggingface.co/THUDM/glm-4-voice-decoder and put it into `../models/THUDM/glm-4-voice-decoder`.
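These downloads can also be scripted; a minimal sketch with `huggingface_hub` (assuming it is installed) that mirrors the directory layout above:

```python
# Download the three pretrained components into the layout expected by
# the training scripts (../models/<org>/<repo>).
from huggingface_hub import snapshot_download

for repo_id, local_dir in [
    ("Qwen/Qwen2.5-7B-Instruct", "../models/Qwen/Qwen2.5-7B-Instruct"),
    ("THUDM/glm-4-voice-tokenizer", "../models/THUDM/glm-4-voice-tokenizer"),
    ("THUDM/glm-4-voice-decoder", "../models/THUDM/glm-4-voice-decoder"),
]:
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
```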
Each training sample is a JSON record of chat messages plus the audio files they reference. An ASR sample pairs a speech file with its transcript (here, "没有跟大家说是在做什么", roughly "didn't tell everyone what it was they were doing"):

```json
{
  "messages": [
    {
      "content": "Convert the speech to text.\n<|audio|>",
      "role": "user"
    },
    {
      "content": "没有跟大家说是在做什么",
      "role": "assistant"
    }
  ],
  "audios": [
    "datasets/wenet-e2e/wenetspeech/data/cuts_L_fixed.00000000/X00/X0000016296_135343932_S00019.wav"
  ]
}
```
A TTS sample reverses the direction: the target text (here, "那我情愿无药可救", roughly "then I would rather be beyond saving") goes in the user turn, and the `<|audio|>` placeholder marks the assistant's spoken reply:

```json
{
  "messages": [
    {
      "content": "Convert the text to speech.\n那我情愿无药可救。",
      "role": "user"
    },
    {
      "content": "<|audio|>",
      "role": "assistant"
    }
  ],
  "audios": [
    "datasets/Wenetspeech4TTS/WenetSpeech4TTS/Premium/WenetSpeech4TTS_Premium_9/wavs/X0000001735_50639692_S00035.wav"
  ]
}
```
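If you are preparing your own data, a minimal sketch of helpers that emit records in this shape could look like the following. Only the schema (the `messages`/`audios` fields and the `<|audio|>` placeholder) comes from the examples above; the helper names, file paths, and JSONL output format are assumptions.

```python
# Illustrative helpers for emitting training records in the format shown above.
import json

def asr_sample(wav_path: str, transcript: str) -> dict:
    """One speech-to-text sample: audio in the user turn, text as the answer."""
    return {
        "messages": [
            {"content": "Convert the speech to text.\n<|audio|>", "role": "user"},
            {"content": transcript, "role": "assistant"},
        ],
        "audios": [wav_path],
    }

def tts_sample(text: str, wav_path: str) -> dict:
    """One text-to-speech sample: text in the user turn, audio as the answer."""
    return {
        "messages": [
            {"content": f"Convert the text to speech.\n{text}", "role": "user"},
            {"content": "<|audio|>", "role": "assistant"},
        ],
        "audios": [wav_path],
    }

# Write one record per line (JSONL); adjust if the loader expects a JSON array.
with open("my_sts_data.jsonl", "w", encoding="utf-8") as f:
    record = asr_sample("datasets/my_corpus/utt_0001.wav", "transcript text here")
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```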
The following tutorial takes VITA-Audio-Boost as an example.

- To train VITA-Audio-Balance and other variants, modify the `--text-audio-interval-ratio` argument (one possible reading of these ratios is sketched after this list).

  VITA-Audio-Boost:

  ```
  --text-audio-interval-ratio 1 10 4 10 \
  ```

  VITA-Audio-Balance:

  ```
  --text-audio-interval-ratio 1 4 3 8 4 10 \
  ```

- To train `VITA-Audio-Plus-*`, use a script like `scripts/deepspeed/sts_qwen25/finetune_sensevoice_glm4voice...`
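The README does not spell out the semantics of these numbers, but one plausible reading is that the list is consumed as (text tokens, audio tokens) pairs, with the last pair repeated once the schedule is exhausted. The sketch below illustrates that assumed interpretation only; consult the training code for the authoritative behavior.

```python
# Hypothetical illustration of a text-audio interval schedule.
# Assumption: "1 10 4 10" is read as pairs (text_n, audio_n): first emit
# 1 text token then 10 audio tokens, afterwards 4 text then 10 audio, repeating.
def interleave_schedule(ratio, total=40):
    pairs = list(zip(ratio[::2], ratio[1::2]))
    out, i = [], 0
    while len(out) < total:
        text_n, audio_n = pairs[min(i, len(pairs) - 1)]
        out += ["T"] * text_n + ["A"] * audio_n
        i += 1
    return "".join(out[:total])

print(interleave_schedule([1, 10, 4, 10]))      # Boost:   TAAAAAAAAAATTTTAAAAAAAAAA...
print(interleave_schedule([1, 4, 3, 8, 4, 10])) # Balance: denser text early on
```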
Stage 1:

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Modify other variables as needed for your environment.
Stage 2:

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp1_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 1.
- Modify other variables as needed for your environment.
Stage 3:

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 2.
- Modify other variables as needed for your environment.
Stage 4:

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage2.sh 2048 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 3.
- Modify other variables as needed for your environment.
Here we implement a simple script for inference.
It includes examples of speech-to-speech, ASR, and TTS tasks, as well as streaming and non-streaming inference speed testing.
```sh
python tools/inference_sts.py
```

- Set `model_name_or_path` to the VITA-Audio weights.
- Set `audio_tokenizer_path` to the path of the audio encoder.
- Set `flow_path` to the path of the audio decoder.
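For example, a minimal configuration sketch (the three names come from the list above; whether they are edited inside the script or passed as flags depends on `tools/inference_sts.py`, and the paths shown assume the directory layout from the setup steps):

```python
# Hypothetical settings for tools/inference_sts.py; names from this README,
# values assume the ../models layout used earlier.
model_name_or_path = "../models/VITA-MLLM/VITA-Audio-Boost"
audio_tokenizer_path = "../models/THUDM/glm-4-voice-tokenizer"  # audio encoder
flow_path = "../models/THUDM/glm-4-voice-decoder"               # audio decoder
```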
To evaluate the SQA, ASR, and TTS benchmarks:

```sh
bash scripts/deepspeed/evaluate_sts.sh
```
VITA-Audio is trained on a large-scale open-source corpus, and its output is stochastic. Any content generated by VITA-Audio does not represent the views of the model developers. We are not responsible for any problems arising from the use, misuse, or dissemination of VITA-Audio, including but not limited to public opinion risks and data security issues.
If you find our work helpful for your research, please consider citing the following BibTeX entry.
```bibtex
@misc{long2025vitaaudio,
      title={VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model},
      author={Zuwei Long and Yunhang Shen and Chaoyou Fu and Heting Gao and Lijiang Li and Peixian Chen and Mengdan Zhang and Hang Shao and Jian Li and Jinlong Peng and Haoyu Cao and Ke Li and Rongrong Ji and Xing Sun},
      year={2025},
      eprint={2505.03739},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03739},
}
```