The goal of this project is to demo a speech <-> LangChain <-> audio workflow; a minimal code sketch follows the component list below.
- Speech-to-text uses OpenAI's open-source Whisper mini model.
- The chat model for this demo is Microsoft's Phi-3, running locally via Ollama.
- Text-to-audio uses Suno's open-source Bark small model.
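The snippet below is a minimal sketch of that pipeline, not the project's exact implementation. It assumes the Hugging Face `transformers` pipelines for Whisper and Bark, the `langchain-ollama` package, and a local Ollama server with the `phi3` model already pulled; the model ids and file names are illustrative.

```python
# Minimal sketch: speech -> LangChain (Phi-3 via Ollama) -> audio.
# Assumes: pip install transformers torch scipy langchain-ollama
# and a running Ollama server with `ollama pull phi3` done beforehand.
import scipy.io.wavfile as wavfile
from transformers import pipeline
from langchain_ollama import ChatOllama

# 1. Speech to text with a small open-source Whisper checkpoint
#    (model id is an assumption; requires ffmpeg to decode the input file).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
question = asr("question.wav")["text"]

# 2. Generate a reply with Phi-3 served locally by Ollama.
llm = ChatOllama(model="phi3")
answer = llm.invoke(question).content

# 3. Text to audio with Bark small, then save the waveform to disk.
tts = pipeline("text-to-speech", model="suno/bark-small")
speech = tts(answer)
wavfile.write("response.wav", rate=speech["sampling_rate"],
              data=speech["audio"].squeeze())
```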
For interesting projects and related resources, check out the Awesome Projects Page.
Unmute the audio to hear the responses.