A modern, responsive AI chat interface with integrated web search. Code Infinity AI provides a clean UI similar to Perplexity.ai, combining conversational AI with real-time search capabilities.
- Real-time AI Responses - Stream AI responses as they're generated
- Integrated Web Search - AI can search the web for up-to-date information
- Conversation Memory - Maintains context throughout your conversation
- Search Process Transparency - Visual indicators show searching, reading, and writing stages
- Responsive Design - Clean, modern UI that works across devices
Code Infinity AI follows a client-server architecture.

Client:

- Modern React application built with Next.js
- Real-time streaming updates using Server-Sent Events (SSE)
- Components for message display, search status, and input handling

Server:

- Python backend using FastAPI for API endpoints
- LangGraph implementation for conversation flow with LLM and tools
- Integration with the Tavily Search API for web search
- Server-Sent Events for real-time streaming of AI responses
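As a rough sketch of how such streaming works (illustrative only, not the project's actual code), each update can be encoded as an SSE frame: an optional `event:` name, a `data:` line carrying JSON, and a terminating blank line:

```python
import json
from typing import Iterator


def sse_format(event: str, payload: dict) -> str:
    """Encode one Server-Sent Events frame: event name, JSON data line, blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"


def stream_tokens(tokens: list[str]) -> Iterator[str]:
    """Yield one SSE frame per generated token, then a final done marker."""
    for token in tokens:
        yield sse_format("token", {"text": token})
    yield sse_format("done", {})
```

With FastAPI, a generator like this would typically be wrapped in a `StreamingResponse` with `media_type="text/event-stream"`.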
- Node.js 18+
- Python 3.10+
- OpenAI API key
- Tavily API key
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd code_infinity_ai
   ```
2. Set up the server

   ```bash
   cd server
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```
3. Configure environment variables

   Create a `.env` file in the server directory:

   ```
   OPENAI_API_KEY=your_openai_api_key
   TAVILY_API_KEY=your_tavily_api_key
   ```
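Before starting the server, it can help to confirm the keys are actually visible to the process. A tiny hypothetical helper (not part of the repository):

```python
import os


def check_required_env(names: list[str]) -> list[str]:
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in names if not os.environ.get(name)]


# Example: report anything missing up front instead of failing later
# with a cryptic API authentication error.
missing = check_required_env(["OPENAI_API_KEY", "TAVILY_API_KEY"])
```

If `missing` is non-empty, fix the `.env` file (or your shell exports) before launching.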
4. Set up the client

   ```bash
   cd ../client
   npm install
   ```
1. Build and run with Docker Compose

   ```bash
   docker-compose up --build
   ```

   This will build and start both the client and server containers.
2. Build the containers separately

   For the server:

   ```bash
   cd server
   docker build -t code-infinity-server .
   docker run -p 8000:8000 code-infinity-server
   ```

   For the client:

   ```bash
   cd client
   docker build -t code-infinity-client .
   docker run -p 3000:3000 code-infinity-client
   ```
3. Environment variables with Docker

   - Create a `.env` file in the root directory; Docker Compose will automatically use these variables
   - For manual container runs, use:

     ```bash
     docker run --env-file .env -p 8000:8000 code-infinity-server
     ```
1. Start the server

   ```bash
   cd server
   uvicorn app:app --reload
   ```

2. Start the client

   ```bash
   cd client
   npm run dev
   ```

3. Open your browser and navigate to http://localhost:3000
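On the client side the browser's `EventSource` API handles the stream automatically; to make the wire format concrete, here is an illustrative Python sketch (not project code) of splitting an SSE byte stream into data payloads:

```python
from typing import Iterator


def parse_sse(stream: Iterator[bytes]) -> Iterator[str]:
    """Accumulate `data:` lines until a blank line, then emit the joined payload."""
    buffer: list[str] = []
    for raw in stream:
        line = raw.decode("utf-8").rstrip("\n")
        if line.startswith("data: "):
            buffer.append(line[len("data: "):])
        elif line == "" and buffer:
            # A blank line terminates the event; multi-line data is newline-joined.
            yield "\n".join(buffer)
            buffer = []
```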
- User sends a message through the chat interface
- The server processes the message using GPT-4o
- The AI decides whether to use search or respond directly
- If search is needed:
  - The search query is sent to the Tavily API
  - Results are processed and provided back to the AI
  - The AI uses this information to formulate a response
- The response is streamed back to the client in real time
- Search stages are displayed to the user (searching, reading, writing)
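The flow above can be sketched as a plain-Python loop. This is a simplification of what the LangGraph graph orchestrates, and every name here is illustrative:

```python
from typing import Callable, Iterator


def answer(
    message: str,
    needs_search: Callable[[str], bool],
    search: Callable[[str], str],
    generate: Callable[[str], Iterator[str]],
) -> Iterator[tuple[str, str]]:
    """Yield (stage, payload) events mirroring the searching/reading/writing stages."""
    context = ""
    if needs_search(message):
        yield ("searching", message)
        context = search(message)  # would call the Tavily API; stubbed here
        yield ("reading", context)
    # Stream generated tokens as "writing" events.
    yield from (("writing", token) for token in generate(message + context))
```

In the real application the stage events would be forwarded to the client over SSE, which is how the UI knows to show the searching, reading, and writing indicators.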
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Next.js, React, FastAPI, and LangGraph
- Powered by OpenAI GPT-4o and Tavily Search API