This repository provides a simple yet powerful example of building a conversational chatbot with real-time web access, leveraging Tavily's advanced search capabilities. It intelligently routes queries between its base knowledge and live Tavily web searches, ensuring accurate, up-to-date responses.
Designed for easy customization, this core implementation can be extended to:
- Integrate proprietary data
- Modify the chatbot architecture
- Swap in different LLMs
Tavily empowers developers to easily create custom chatbots and agents that seamlessly interact with web content.
- 🔍 Intelligent question routing between base knowledge and web search
- 🧠 Conversational memory with LangGraph
- 🌐 Real-time web search capabilities powered by Tavily
- 🚀 FastAPI backend with async support
- 🔄 Streaming of agentic substeps
- 💬 Markdown support in chat responses
- 🔗 Citations for web search results
This repository includes everything required to create a functional chatbot with web access:
📡 Backend (`backend/`)
The core backend logic, powered by LangGraph:
- `chatbot.py` – Defines the chatbot architecture, state management, and processing nodes.
- `prompts.py` – Contains customizable prompt templates.
- `utils.py` – Utilities for parsing and managing conversation messages.
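The routing logic is the heart of `chatbot.py`. As a rough orientation, a minimal LangGraph graph with the same shape might look like the sketch below (assuming the `langgraph`, `langchain-openai`, and `tavily-python` packages; the node and function names are illustrative, not the ones the repo actually uses):

```python
import os

from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
from tavily import TavilyClient

llm = ChatOpenAI(model="gpt-4o-mini")  # model choice is an assumption
tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def route(state: MessagesState) -> str:
    # Ask the LLM whether the latest question needs fresh web data.
    decision = llm.invoke(
        [{"role": "system", "content": "Reply 'web' if answering the last "
          "user message requires current information; otherwise reply 'base'."}]
        + state["messages"]
    )
    return "web_search" if "web" in decision.content.lower() else "respond"

def web_search(state: MessagesState) -> dict:
    # Fetch live results from Tavily and expose them to the answer node.
    results = tavily.search(state["messages"][-1].content)
    context = "\n\n".join(r["content"] for r in results["results"])
    return {"messages": [{"role": "system", "content": f"Web context:\n{context}"}]}

def respond(state: MessagesState) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("web_search", web_search)
builder.add_node("respond", respond)
builder.add_conditional_edges(START, route)  # base knowledge vs. web search
builder.add_edge("web_search", "respond")
builder.add_edge("respond", END)
# The checkpointer is what gives the graph per-thread conversational memory.
graph = builder.compile(checkpointer=MemorySaver())
```

Invoking the compiled graph with a `thread_id` (e.g. `graph.invoke({"messages": [...]}, {"configurable": {"thread_id": "1"}})`) is what lets the checkpointer carry context across turns.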
🌐 Frontend (`ui/`)
Interactive React frontend for dynamic user interactions and chatbot responses.
- `app.py` – FastAPI server that handles API endpoints and streaming responses.
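As a sketch of how a FastAPI server can stream LangGraph substeps to the UI (the request schema and import path below are assumptions, not the repo's actual code):

```python
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

from backend.chatbot import graph  # the compiled LangGraph (path assumed)

app = FastAPI()

class ChatRequest(BaseModel):  # hypothetical request schema
    message: str
    thread_id: str

@app.post("/stream_agent")
async def stream_agent(req: ChatRequest):
    async def event_stream():
        config = {"configurable": {"thread_id": req.thread_id}}
        # Emit one JSON line per graph step (routing, search, final answer).
        async for step in graph.astream(
            {"messages": [{"role": "user", "content": req.message}]}, config
        ):
            yield json.dumps(step, default=str) + "\n"

    return StreamingResponse(event_stream(), media_type="application/x-ndjson")
```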
a. Create a `.env` file in the root directory with:

```bash
TAVILY_API_KEY="your-tavily-api-key"
OPENAI_API_KEY="your-openai-api-key"
VITE_APP_URL=http://localhost:5173
```
b. Create a `.env` file in the `ui` directory with:

```bash
VITE_BACKEND_URL=http://localhost:8080
```
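After installing the backend dependencies (below), you can confirm both keys are visible to the backend with a quick check. This assumes `python-dotenv` is used to load the `.env` file, which is common for FastAPI projects but worth confirming against `requirements.txt`:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
for key in ("TAVILY_API_KEY", "OPENAI_API_KEY"):
    assert os.getenv(key), f"{key} is missing from the environment"
```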
- Create a virtual environment and activate it:

```bash
python3 -m venv venv
source venv/bin/activate  # On Windows: .\venv\Scripts\activate
```

- Install dependencies:

```bash
python3 -m pip install -r requirements.txt
```

- From the root of the project, run the backend server:

```bash
python app.py
```
- Alternatively, build and run the backend using Docker from the root of the project:

```bash
# Build the Docker image
docker build -t chat-tavily .

# Run the container
docker run -p 8080:8080 --env-file .env chat-tavily
```
- Navigate to the frontend directory:

```bash
cd ui
```

- Install dependencies:

```bash
npm install
```

- Start the development server:

```bash
npm run dev
```
`POST /stream_agent`: Chat endpoint that handles streamed LangGraph execution
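A hypothetical Python client for this endpoint; the exact request fields are defined in `app.py`, so `message` and `thread_id` below are assumptions:

```python
import requests

with requests.post(
    "http://localhost:8080/stream_agent",
    json={"message": "What happened in AI news today?", "thread_id": "demo"},
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(line.decode())  # one chunk per agent substep
```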
Feel free to submit issues and enhancement requests!
Have questions or feedback, or looking to build something custom? We'd love to hear from you!
- Email our team directly:
Powered by Tavily - the web API built for AI agents