ChadRefuter is an AI-powered Reddit bot that leverages multiple LLM providers (Ollama, OpenAI, Anthropic, HuggingFace) to engage in discussions and debates. Using asynchronous processing and queues, it efficiently handles post scanning, response generation, and comment posting while strictly adhering to Reddit's API rate limits.
- Asynchronous post processing and comment management using `asyncio`
- Support for multiple LLM providers with easy switching
- SQLite database for post/comment tracking and persistence
- Comprehensive error handling and graceful shutdown
- Structured logging with both file and console output
- Multiple Provider Support:
  - Ollama (default, local deployment)
  - OpenAI (GPT-3.5/4)
  - Anthropic (Claude)
  - HuggingFace (hosted models)
- Configurable Models: Each provider supports custom model selection
- Fallback Handling: Graceful error handling for LLM failures
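The provider-switching and fallback behavior can be sketched with a small abstraction. This is illustrative only — names like `LLMProvider`, `generate`, and `generate_with_fallback` are assumptions, not the bot's actual API:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface so providers can be swapped at startup."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class OllamaProvider(LLMProvider):
    def generate(self, prompt: str) -> str:
        # A real implementation would call the local Ollama HTTP API here;
        # we simulate a failure to demonstrate the fallback path.
        raise ConnectionError("Ollama server not running")

class EchoProvider(LLMProvider):
    """Stand-in fallback used when the primary provider fails."""
    def generate(self, prompt: str) -> str:
        return f"[fallback] {prompt}"

def generate_with_fallback(primary: LLMProvider, fallback: LLMProvider, prompt: str) -> str:
    try:
        return primary.generate(prompt)
    except Exception:
        # Graceful degradation instead of crashing the bot loop.
        return fallback.generate(prompt)

print(generate_with_fallback(OllamaProvider(), EchoProvider(), "hello"))  # [fallback] hello
```

A uniform interface like this is what makes `--llm-provider` switching cheap: the rest of the bot only ever sees `generate()`.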
- Asynchronous Queues:
  - Post Queue: manages newly detected posts
  - Processing Queue: holds LLM-generated responses awaiting posting
- Rate Limiting:
  - 120-second delay between comments
  - Automatic queue management
  - Reddit API compliance
- Concurrent Operations:
  - Parallel post scanning
  - Async response generation
  - Non-blocking comment posting
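The two-queue, rate-limited pipeline can be sketched as follows. This is a minimal model, not the bot's real code: the LLM call and Reddit API call are stubbed out, and the delay is shortened to 0.1 s (the bot uses 120 s):

```python
import asyncio

COMMENT_DELAY = 0.1  # the real bot waits 120 seconds between comments

async def process_queue(post_queue, processing_queue):
    """Turn raw posts into responses (stands in for the LLM call)."""
    while True:
        post = await post_queue.get()
        processing_queue.put_nowait(f"reply to {post}")
        post_queue.task_done()

async def comment_processor(processing_queue, posted):
    """Drain responses, rate-limited between comments."""
    while True:
        reply = await processing_queue.get()
        posted.append(reply)  # stands in for the Reddit API call
        processing_queue.task_done()
        await asyncio.sleep(COMMENT_DELAY)

async def main():
    post_queue, processing_queue, posted = asyncio.Queue(), asyncio.Queue(), []
    workers = [
        asyncio.create_task(process_queue(post_queue, processing_queue)),
        asyncio.create_task(comment_processor(processing_queue, posted)),
    ]
    for post in ("post1", "post2"):
        post_queue.put_nowait(post)
    await post_queue.join()        # wait until every post was processed
    await processing_queue.join()  # wait until every reply was posted
    for w in workers:
        w.cancel()
    return posted

print(asyncio.run(main()))  # ['reply to post1', 'reply to post2']
```

Because the comment delay lives entirely inside `comment_processor`, scanning and LLM generation keep running at full speed while posting stays within the rate limit.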
- Python 3.8+
- PRAW (Reddit API)
- `asyncio` for concurrent operations
- SQLite for persistence
- Multiple LLM provider SDKs
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/chadrefuter.git
  cd chadrefuter
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure environment:

  ```bash
  cp .env.example .env
  ```
Required environment variables in `.env`:

```env
# Reddit credentials
CLIENT_ID=your_client_id
CLIENT_SECRET=your_client_secret
USERNAME=your_bot_username
PASSWORD=your_bot_password
USER_AGENT=python:chadrefuter:v1.0 (by /u/your_username)

# Bot behavior
SUBREDDIT=target_subreddit
SCAN_INTERVAL=60
REPLY_SCAN_INTERVAL=300
MAX_CONVERSATIONS=5
POSTS_FETCH_LIMIT=5
POST_CACHE_SIZE=1000

# LLM provider API keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
HUGGINGFACE_API_KEY=your_hf_key
```
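Loading these settings might look like the following sketch. The `BotConfig` class and `from_env` helper are illustrative assumptions, not the bot's actual config loader:

```python
import os
from dataclasses import dataclass

@dataclass
class BotConfig:
    subreddit: str
    scan_interval: int = 60
    reply_scan_interval: int = 300
    max_conversations: int = 5

    @classmethod
    def from_env(cls) -> "BotConfig":
        # Environment values arrive as strings; numeric settings need converting.
        return cls(
            subreddit=os.environ["SUBREDDIT"],  # required, no default
            scan_interval=int(os.getenv("SCAN_INTERVAL", "60")),
            reply_scan_interval=int(os.getenv("REPLY_SCAN_INTERVAL", "300")),
            max_conversations=int(os.getenv("MAX_CONVERSATIONS", "5")),
        )

os.environ.setdefault("SUBREDDIT", "test")  # demo value for this sketch
print(BotConfig.from_env())
```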
- Start the bot with the default Ollama provider:

  ```bash
  python src/bot.py
  ```

- Use a specific LLM provider:

  ```bash
  python src/bot.py --llm-provider openai --llm-model gpt-3.5-turbo
  ```
Available providers:

- `ollama` (default, uses `llama2:latest`)
- `openai` (default: `gpt-3.5-turbo`)
- `anthropic` (default: `claude-3-sonnet-20240229`)
- `huggingface` (default: `meta-llama/Llama-2-7b-chat-hf`)
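The flag handling above can be sketched with `argparse`; this is a plausible reconstruction using the flag names and defaults documented here, not necessarily how `src/bot.py` implements it:

```python
import argparse

# Default model per provider, as documented above.
PROVIDER_DEFAULTS = {
    "ollama": "llama2:latest",
    "openai": "gpt-3.5-turbo",
    "anthropic": "claude-3-sonnet-20240229",
    "huggingface": "meta-llama/Llama-2-7b-chat-hf",
}

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="ChadRefuter Reddit bot")
    parser.add_argument("--llm-provider", choices=PROVIDER_DEFAULTS, default="ollama")
    parser.add_argument("--llm-model", default=None)
    args = parser.parse_args(argv)
    if args.llm_model is None:
        # No explicit model: fall back to the provider's default.
        args.llm_model = PROVIDER_DEFAULTS[args.llm_provider]
    return args

args = parse_args(["--llm-provider", "openai"])
print(args.llm_provider, args.llm_model)  # openai gpt-3.5-turbo
```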
- Monitor logs:

  ```bash
  tail -f logs/reddit_bot_YYYYMMDD.log
  ```
- Post Scanner (`scan_posts`):
  - Runs every `SCAN_INTERVAL` seconds
  - Fetches new posts using PRAW
  - Adds them to `post_queue`
- Queue Processor (`process_queue`):
  - Continuously monitors `post_queue`
  - Processes posts through the LLM
  - Adds responses to `processing_queue`
- Comment Handler (`comment_processor`):
  - Monitors `processing_queue`
  - Implements rate limiting
  - Posts comments to Reddit
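The scanner's periodic loop can be sketched as below. The `fetch_new_posts` callable and the `seen` cache are stand-ins (in the real bot, PRAW fetches posts and the cache is bounded by `POST_CACHE_SIZE`), and the interval is shortened so the example runs quickly:

```python
import asyncio

SCAN_INTERVAL = 0.05  # the real bot reads this from the SCAN_INTERVAL setting (seconds)

async def scan_posts(post_queue, fetch_new_posts, seen, rounds=2):
    """Periodically fetch posts and enqueue only unseen ones."""
    for _ in range(rounds):  # the real loop runs until shutdown
        for post_id in fetch_new_posts():
            if post_id not in seen:  # dedupe against the post cache
                seen.add(post_id)
                post_queue.put_nowait(post_id)
        await asyncio.sleep(SCAN_INTERVAL)

async def main():
    queue, seen = asyncio.Queue(), set()
    batches = iter([["a", "b"], ["b", "c"]])  # overlapping fetches, as on Reddit
    await scan_posts(queue, lambda: next(batches), seen)
    return [queue.get_nowait() for _ in range(queue.qsize())]

print(asyncio.run(main()))  # ['a', 'b', 'c']
```

Note that post `b` appears in both fetches but is enqueued only once — this dedupe step is why the bot keeps a post cache at all.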
- Posts Table:
  - `post_id`: Reddit post ID
  - `subreddit`: Subreddit name
  - `title`: Post title
  - `post_text`: Post content
  - `author`: Post author
  - `timestamp`: Post creation time
  - `llm_response`: Generated response
  - `response_timestamp`: Response time
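A plausible schema for this table, using the column names listed above (the column types and `REAL` epoch timestamps are assumptions):

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS posts (
    post_id            TEXT PRIMARY KEY,   -- Reddit post ID
    subreddit          TEXT NOT NULL,
    title              TEXT,
    post_text          TEXT,
    author             TEXT,
    timestamp          REAL,               -- post creation time (epoch seconds)
    llm_response       TEXT,
    response_timestamp REAL
)
"""

# In-memory database for demonstration; the bot persists to a file.
conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute(
    "INSERT INTO posts (post_id, subreddit, title) VALUES (?, ?, ?)",
    ("abc123", "debate", "Example post"),
)
row = conn.execute(
    "SELECT subreddit FROM posts WHERE post_id = ?", ("abc123",)
).fetchone()
print(row[0])  # debate
```

Making `post_id` the primary key gives the scanner a second layer of dedupe: re-inserting an already-processed post fails rather than producing a duplicate reply.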
- Structured logging with thread-safe handlers
- Separate console and file outputs
- Detailed debug information for LLM interactions
- Performance metrics and error tracking
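A minimal sketch of the dual console/file setup using the standard `logging` module (the logger name, levels, and format string are assumptions; only the `reddit_bot_YYYYMMDD.log` filename pattern comes from the docs above):

```python
import logging
import sys
from datetime import datetime

def setup_logging(log_dir: str = "logs") -> logging.Logger:
    logger = logging.getLogger("reddit_bot")
    logger.setLevel(logging.DEBUG)
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")

    # Console handler: INFO and above, to keep the terminal readable.
    console = logging.StreamHandler(sys.stdout)
    console.setLevel(logging.INFO)
    console.setFormatter(fmt)
    logger.addHandler(console)

    # File handler: full DEBUG detail, one file per day.
    filename = f"{log_dir}/reddit_bot_{datetime.now():%Y%m%d}.log"
    file_handler = logging.FileHandler(filename)
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(fmt)
    logger.addHandler(file_handler)
    return logger
```

Standard-library handlers are thread-safe, which matters once multiple worker tasks log concurrently.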
MIT License - see `LICENSE` for details.