Jarvis is a powerful custom agent platform with Telegram integration, designed to streamline research tasks using LangGraph Agents.
You can start using Jarvis in two ways:

- **Direct Telegram Search:** search for `@jarvyjarvisbot` in Telegram
- **Quick Link:** Launch Jarvis on Telegram →
*Screenshots: basic usage of the bot; the bot asking clarifying questions when needed; the research agent running in LangGraph Studio; the research agent graph structure (with output); the chat agent graph structure.*
Similar to Perplexity.ai, Jarvis performs real-time web searches to answer your questions. However, there are some key differences:
- **Clarification First:** Unlike Perplexity, Jarvis starts by asking clarifying questions when needed to ensure accurate research
- **Multi-threaded Research:** Parallel web searches and scraping are conducted to gather comprehensive information from multiple sources
- **Review Process:** A dedicated reviewer agent evaluates the findings and may initiate additional research if needed
A typical request flows through the system like this:

- User submits a question via Telegram
- Chat agent evaluates clarity and asks follow-up questions if needed
- Research agent conducts parallel web searches
- Reviewer evaluates the findings and either:
  - requests additional research, or
  - returns the final answer to the user
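This chat → research → review loop maps naturally onto a LangGraph state graph. The sketch below illustrates that shape only; the node names, state fields, and routing logic are assumptions for illustration, not the project's actual implementation.

```python
# Minimal sketch of the chat -> research -> review loop in LangGraph.
# Node names, state fields, and routing are illustrative assumptions.
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class ResearchState(TypedDict):
    question: str
    findings: list[str]
    approved: bool


def chat(state: ResearchState) -> dict:
    # In Jarvis this step would ask clarifying questions via Telegram.
    return {"question": state["question"].strip()}


def research(state: ResearchState) -> dict:
    # In Jarvis this step would fan out parallel web searches and scraping.
    return {"findings": state["findings"] + [f"result for: {state['question']}"]}


def review(state: ResearchState) -> dict:
    # A reviewer agent would score the findings here.
    return {"approved": len(state["findings"]) > 0}


def route_after_review(state: ResearchState) -> str:
    # Either loop back for more research or finish with the answer.
    return END if state["approved"] else "research"


builder = StateGraph(ResearchState)
builder.add_node("chat", chat)
builder.add_node("research", research)
builder.add_node("review", review)
builder.add_edge(START, "chat")
builder.add_edge("chat", "research")
builder.add_edge("research", "review")
builder.add_conditional_edges("review", route_after_review)
graph = builder.compile()

print(graph.invoke({"question": " what is langgraph? ", "findings": [], "approved": False}))
```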
Under the hood, Jarvis includes:

- Telegram bot integration
- User management system
- MongoDB integration for message history / cancellation
- Support for multiple LLM providers (Claude, OpenAI, Groq, Ollama)
- We recommend using Ollama + llama3.2 for local development
A few caveats:

- This bot was built for research, not production use
- There is a rate limit of 8 research calls on the published bot
- The bot is not optimized for scaling or performance; it is intended to be run locally
- If this were to be deployed, we would recommend splitting the agent from the Telegram service
- The webhook is not secured (one common mitigation is sketched below), and other components would benefit from some refactoring
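On the webhook point: Telegram's `setWebhook` accepts a `secret_token` that Telegram then echoes back in the `X-Telegram-Bot-Api-Secret-Token` header of every update, so rejecting requests without it is a cheap hardening step. A minimal FastAPI sketch, assuming a `/webhook` route and a `WEBHOOK_SECRET` variable (neither is from this repo):

```python
# Sketch: reject webhook calls missing Telegram's secret token header.
# The /webhook path and WEBHOOK_SECRET name are assumptions; the header
# itself is standard Telegram Bot API behavior (set via setWebhook).
import os
from typing import Optional

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"]


@app.post("/webhook")
async def telegram_webhook(
    request: Request,
    x_telegram_bot_api_secret_token: Optional[str] = Header(default=None),
):
    if x_telegram_bot_api_secret_token != WEBHOOK_SECRET:
        raise HTTPException(status_code=403, detail="bad secret token")
    update = await request.json()
    # ... hand the update off to the chat agent here ...
    return {"ok": True}
```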
Before you begin, ensure you have the following:
- Python 3.9 or higher
- Docker and Docker Compose
- MongoDB (local or Atlas)
- Telegram Bot Token
- Serper API Key
- Clone the repository:

  ```bash
  git clone https://github.com/Encodex-Ai/agent-suite.git
  cd agent-suite
  ```

- Configure the environment (an illustrative `.env` example follows this list):

  ```bash
  cp .env.template .env
  ```

  Edit `.env` with your:
  - MongoDB connection string (Get MongoDB Atlas)
  - Telegram bot token (Create with BotFather)
  - Serper API key (Get from Serper)

- Install dependencies:

  ```bash
  cd backend
  poetry install
  ```
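For orientation, the relevant `.env` entries might look like the following. Only `TELEGRAM_TOKEN`, `CLOUD_RUN_SERVICE_URL`, `MODEL_NAME`, and `OLLAMA_API_URL` are named elsewhere in this README; the other key names are placeholders, so check `.env.template` for the actual ones.

```env
# Illustrative values only -- consult .env.template for the real key names.
MONGODB_URI=mongodb+srv://user:pass@cluster0.example.mongodb.net/jarvis
TELEGRAM_TOKEN=123456:ABC-your-bot-token
SERPER_API_KEY=your-serper-key
CLOUD_RUN_SERVICE_URL=https://your-ngrok-or-cloud-run-url
MODEL_NAME=llama3.2
OLLAMA_API_URL=http://host.docker.internal:11434
```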
To run the application in development mode:
- Start the MongoDB service (if not using a cloud-hosted instance). We recommend MongoDB Atlas, where you can create a free cluster; once the cluster exists, get the connection string from the "Connect" section of the cluster's page.

- Register your Telegram bot with BotFather and get the token: https://core.telegram.org/bots#botfather

- Set `TELEGRAM_TOKEN` in your `.env` file to your bot's token.

- Use ngrok to create a tunnel to your local server. You can register and download it from Ngrok, then start the tunnel:

  ```bash
  ngrok http 8080
  ```

- Copy the HTTPS URL provided by ngrok (e.g., `https://1234-abcd-efgh.ngrok.io`) and set `CLOUD_RUN_SERVICE_URL` in your `.env` file to this URL.

- Run the application with Docker Compose:

  ```bash
  docker compose up --build
  ```

- The API will be available at `http://localhost:8080`.

- To set the webhook for your Telegram bot manually, navigate to `http://localhost:8080/set_webhook` (what this does under the hood is sketched below).
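Registering a webhook boils down to one call against the Telegram Bot API, which is all an endpoint like `/set_webhook` needs to do. A sketch of the equivalent manual call (the `/webhook` path on the service is an assumption; the `api.telegram.org` endpoint is not):

```python
# Sketch: register the Telegram webhook directly.
# TELEGRAM_TOKEN and CLOUD_RUN_SERVICE_URL match the .env keys above;
# the /webhook path on the backend is an assumption.
import os

import requests

token = os.environ["TELEGRAM_TOKEN"]
base_url = os.environ["CLOUD_RUN_SERVICE_URL"]  # your ngrok HTTPS URL in dev

resp = requests.post(
    f"https://api.telegram.org/bot{token}/setWebhook",
    json={"url": f"{base_url}/webhook"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"ok": true, "result": true, "description": "Webhook was set"}
```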
For development without cloud LLMs:
- Install and start Ollama:

  ```bash
  # Install from https://ollama.com/docs/installation
  ollama pull llama3.2
  ollama serve
  ```

- Configure Ollama in `.env`:
  - Set `MODEL_NAME=llama3.2`
  - Set `OLLAMA_API_URL=http://host.docker.internal:11434`

- Make sure `OLLAMA_API_URL` is set to `http://host.docker.internal:11434` so that the backend can communicate with the Ollama server through Docker (a quick connectivity check is sketched below).
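To confirm the backend will actually be able to reach Ollama, you can hit Ollama's HTTP API directly; `GET /api/tags` lists the locally pulled models. A quick check:

```python
# Sketch: verify Ollama is reachable and llama3.2 has been pulled.
# /api/tags is Ollama's endpoint for listing local models. When running
# this on the host (outside Docker), http://localhost:11434 also works.
import os

import requests

base = os.environ.get("OLLAMA_API_URL", "http://host.docker.internal:11434")

tags = requests.get(f"{base}/api/tags", timeout=5).json()
models = [m["name"] for m in tags.get("models", [])]
print("available models:", models)
assert any(name.startswith("llama3.2") for name in models), "run: ollama pull llama3.2"
```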
For agent visualization and debugging:
- Setup requirements:
  - Install LangGraph Studio (.dmg)
  - Ensure Docker Engine is running
  - Install docker-compose (v2.22.0+)

- Launch and configure:
  - Open LangGraph Studio
  - Log in with LangSmith
  - Select the `backend/` folder
  - Configure via `langgraph.json` (an example of its shape is sketched below)
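For reference, a minimal `langgraph.json` follows the shape below; the graph module path and exported variable name here are assumptions about this repo's layout, not its actual config.

```json
{
  "dependencies": ["."],
  "graphs": {
    "research_agent": "./app/agents/research_agent.py:graph"
  },
  "env": ".env"
}
```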
Key features:
- Real-time graph visualization
- Step-by-step debugging
- Thread management
- Node replay capabilities
Full LangGraph Documentation →
To run tests:
```bash
cd backend
poetry run pytest
```
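The suite under `backend/tests/` is basic; a representative route-level test might look like the sketch below. The `app.main` import path is an assumption about the repo layout, while the `/set_webhook` route is referenced earlier in this README.

```python
# Sketch of a pytest-style sanity test for the FastAPI app.
# The app.main import path is an assumption about this repo's layout.
from app.main import app


def test_set_webhook_route_registered():
    # Confirm the route exists without calling out to Telegram.
    paths = [route.path for route in app.routes]
    assert "/set_webhook" in paths
```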
The project is set up for deployment to Google Cloud Run. The `cicd.yml` workflow in the `.github/workflows` directory handles the CI/CD process.
To set up the project in Cloud Run:

- Create a new project in Google Cloud
- Enable the Cloud Run API
- Create a new service account and set `GOOGLE_APPLICATION_CREDENTIALS` in your `.env` file to the JSON key file used for authentication
- Create a new Cloud Run service
- Set `CLOUD_RUN_SERVICE_URL` in your `.env` file to the URL of the Cloud Run service
To deploy:

- Set up the necessary secrets in your GitHub repository settings
- Push changes to the `main` branch to trigger the deployment workflow
Coming soon
Project structure:

- `backend/`: Contains the main application code
  - `app/`: The FastAPI application
    - `agents/`: AI agent implementations
    - `models/`: Data models
    - `routes/`: API endpoints
    - `services/`: Business logic and external service integrations
  - `tests/`: Basic test suite
Performance notes:

- Response Time: Typically 5-20 seconds for thorough research
- Resource Usage: CPU-intensive during multi-threaded searches
- Ollama Memory Requirements:
  - Minimum: 16GB VRAM
  - Recommended: 32GB VRAM for optimal performance
  - The 1B model requires at least 4GB VRAM
  - The 3B model requires at least 8GB VRAM
Contributions are welcome! Please feel free to submit a Pull Request. If you have any questions or feedback, feel free to reach out to Cam at c@encodex.dev or Ewan at ewanmay3@gmail.com.
This project is licensed under the MIT License - see the LICENSE file for details.