ConText is a desktop app for secure, local text translation using Ollama LLMs. It supports language detection, text-to-speech, web scraping, text summarization, and YouTube transcript extraction, all processed locally.
| Feature | Description |
|---|---|
| Local translation | Process translations using Ollama models locally |
| Language detection | Automatically identify the language of input text |
| Text-to-speech | Convert text to WAV audio files |
| Web content scraping | Extract text from websites for translation |
| Text summarization | Create concise summaries of longer texts |
| YouTube transcript retrieval | Get transcripts from YouTube videos |
| Multiple language support | Translate between various languages |
| Configurable text chunking | Customize text processing parameters |
- Python 3.8+
- Ollama installed and running
- Clone the repository:

```bash
git clone https://github.com/KazKozDev/ConText.git
cd ConText
```
- Set up the backend:

```bash
cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
- Start Ollama:

```bash
ollama serve
```

- Start the backend:

```bash
cd backend
source venv/bin/activate  # On Windows: venv\Scripts\activate
python app.py
```
The backend runs at http://localhost:5002.
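Before sending requests, you can verify that the backend is up by probing the `/health` endpoint listed below. A minimal sketch using only the Python standard library (the timeout value and the exact success criterion of an HTTP 200 response are assumptions):

```python
import urllib.request


def check_health(base_url="http://localhost:5002"):
    """Return True if the backend /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout: treat as "not healthy".
        return False
```

If this returns `False`, check that `python app.py` is running and that Ollama itself is serving.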
| Endpoint | Function | Description |
|---|---|---|
| `/health` | Server status | Check if the server is running properly |
| `/translate` | Translate text | Convert text between languages |
| `/detect-language` | Detect language | Identify the language of input text |
| `/tts` | Text-to-speech | Convert text to audio format |
| `/scrape-url` | Scrape web content | Extract text from web pages |
| `/summarize` | Summarize text | Create concise summaries of texts |
| `/youtube-transcript` | YouTube transcript | Extract transcripts from YouTube videos |
Translate English to Spanish:

```bash
curl -X POST http://localhost:5002/translate \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello, world!", "source_lang": "en", "target_lang": "es"}'
```

Response:

```json
{"translated_text": "¡Hola, mundo!"}
```
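The same request can be issued from Python. A minimal sketch using only the standard library, assuming the `/translate` request and response shapes shown above (the helper names and the default base URL are illustrative, not part of the project's API):

```python
import json
import urllib.request

BASE_URL = "http://localhost:5002"  # default backend address


def build_translate_payload(text, source_lang, target_lang):
    """Build the JSON body expected by the /translate endpoint."""
    return {"text": text, "source_lang": source_lang, "target_lang": target_lang}


def translate(text, source_lang, target_lang, base_url=BASE_URL):
    """POST the payload to /translate and return the translated text."""
    body = json.dumps(build_translate_payload(text, source_lang, target_lang)).encode()
    req = urllib.request.Request(
        f"{base_url}/translate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["translated_text"]
```

For example, `translate("Hello, world!", "en", "es")` mirrors the curl call above.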
If you like this project, please give it a star ⭐
For questions, feedback, or support, reach out to: