An open-source platform for running local AI agents that enhance your computing experience while preserving privacy.
Demo:
observerDemo.mp4
We wrap Ollama so it serves HTTPS instead of HTTP, which is what the browser needs in order to connect to it. This is done with self-signed SSL certificates.
```bash
# Make sure to have [Ollama](https://ollama.com) installed
# For local inference, run observer-ollama:
pip install observer-ollama

# Click on the link provided so that your browser accepts self-signed certs (signed by your computer)
# OLLAMA-PROXY ready
#   ➜ Local:   https://localhost:3838/
#   ➜ Network: https://10.0.0.138:3838/
# Click "Proceed to localhost (unsafe)"; if "Ollama is running" shows up, you're done!

# Go to the webapp:
#   app.observer-ai.com
# Enter your inference IP (localhost:3838) in the app header.
```
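Optionally, you can sanity-check the proxy from code as well. A minimal sketch, assuming the root endpoint returns the plain-text "Ollama is running" message mentioned above; run it from the devtools console of the proxy's own tab so CORS and the self-signed certificate don't get in the way:

```javascript
// Quick check that the observer-ollama proxy is reachable over HTTPS.
// Run in the browser console of the https://localhost:3838/ tab after
// accepting the self-signed certificate.
fetch("https://localhost:3838/")
  .then((res) => res.text())
  .then((text) => console.log(text)); // expect: "Ollama is running"
```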
Creating your own Observer AI agent is simple and accessible to both beginners and experienced developers.
- Navigate to the Agent Dashboard and click "Create New Agent"
- Fill in the "Configuration" tab with basic details (name, description, model, loop interval)
- Use the "Context" tab to visually build your agent's input sources by adding blocks:
  - Screen OCR block: Captures screen content as text via OCR
  - Screenshot block: Captures screen as an image for multimodal models
  - Agent Memory block: Accesses other agents' stored information
The "Code" tab now offers a notebook-style coding experience where you can choose between JavaScript or Python execution:
JavaScript agents run in the browser sandbox, making them ideal for passive monitoring and notifications:
```javascript
// Remove <think> tags from the deepseek model's response
const cleanedResponse = response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();

// Previous memory, available if you want to compare against earlier entries
const prevMemory = await getMemory();

// Get the current timestamp
const timestamp = time();

// Append the cleaned response to memory with a timestamp
appendMemory(`[${timestamp}] ${cleanedResponse}`);
```
Note: any function marked with `*` takes an `agentId` argument. If you omit `agentId`, it defaults to the agent that's running the code.
Available utilities include:
- `time()` – Get the current timestamp
- `pushNotification(title, options)` – Send notifications
- `getMemory(agentId)*` – Retrieve stored memory (defaults to current agent)
- `setMemory(agentId, content)*` – Replace stored memory
- `appendMemory(agentId, content)*` – Add to existing memory
- `startAgent(agentId)*` – Starts an agent
- `stopAgent(agentId)*` – Stops an agent
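As a rough sketch of how these utilities fit together (the "ALERT" keyword, the notification body, and the `logger_agent` id are invented for illustration, and the exact `options` shape accepted by `pushNotification` may differ), an agent could notify you and wake a helper agent whenever the model flags something:

```javascript
// Hypothetical example: react when the model's output contains "ALERT".
const cleaned = response.trim();

if (cleaned.includes("ALERT")) {
  // Notify the user (options object assumed to follow the Web Notification style)
  pushNotification("Observer AI", { body: cleaned });

  // Record the event in this agent's own memory with a timestamp
  appendMemory(`[${time()}] ${cleaned}`);

  // Start a helper agent by its id (illustrative id, not part of the API)
  startAgent("logger_agent");
} else {
  // Nothing to report this loop; make sure the helper agent is stopped
  stopAgent("logger_agent");
}
```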
Python agents run on a Jupyter server with system-level access, enabling them to interact directly with your computer:
```python
#python <-- don't remove this!
print("Hello World!", response, agentId)

# Example: analyze screen content and take action
if "SHUTOFF" in response:
    # System-level commands can be executed here
    import os
    # os.system("command")  # Be careful with system commands!
```
The Python environment receives:
- `response` – The model's output
- `agentId` – The current agent's ID
A simple agent that responds to specific commands in the model's output:
```javascript
// Clean the response
const cleanedResponse = response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();

// Command format: log anything after "COMMAND:" to memory
if (cleanedResponse.includes("COMMAND")) {
  const withoutCommand = cleanedResponse.replace(/COMMAND:/g, '');
  setMemory(`${await getMemory()} \n[${time()}] ${withoutCommand}`);
}
```
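Since `appendMemory` already preserves existing memory for you, the same update can be written more compactly (assuming its default separator is acceptable in place of the explicit `\n` above):

```javascript
// Equivalent update using appendMemory instead of getMemory + setMemory
if (cleanedResponse.includes("COMMAND")) {
  const withoutCommand = cleanedResponse.replace(/COMMAND:/g, '');
  appendMemory(`[${time()}] ${withoutCommand}`);
}
```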
To use Python agents:
- Run a Jupyter server on your machine
- Configure the connection in the Observer AI interface:
  - Host: The server address (e.g., 127.0.0.1)
  - Port: The server port (e.g., 8888)
  - Token: Your Jupyter server authentication token
- Test the connection using the "Test Connection" button
- Switch to the Python tab in the code editor to write Python-based agents
Save your agent, test it from the dashboard, and export the configuration to share with others!
We welcome contributions from the community! Here's how you can help:
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- GitHub: @Roy3838
- Project Link: https://github.com/Roy3838/observer-ai
Built with ❤️ by the Observer AI Community