Your command-line companion that demystifies niche libraries and brings clarity to complex codebases
Developers waste too much time searching for documentation, switching between browser tabs, and asking the same questions over and over. Alexandria eliminates that friction by scanning your project, indexing its dependencies, and using an AI-powered local assistant to answer your questions—instantly.
Whether you're working in Python, Node.js, Java, Rust, or Go, Alexandria automatically detects the libraries you're using and fetches relevant documentation. Instead of Googling "How do I use FastAPI middleware?", just:
```bash
alexandria chat
> How do I use FastAPI middleware?
```
Powered by FAISS for fast semantic search and Ollama for local AI reasoning, Alexandria works entirely offline—no API calls, no sending your project data to external services.
- Junior developers needing quick answers without jumping between docs
- Mid-level engineers working across multiple languages
- Senior developers who want a fast, private way to query dependencies in their proprietary projects
No more endless Googling. No more tab-switching. Just ask, and get your answer instantly.
```bash
git clone https://github.com/yourusername/alexandria.git
cd alexandria
```
Alexandria uses Ollama for local AI reasoning. If you haven't installed Ollama yet:

1. Visit the Ollama website for download and installation instructions.
2. Install Ollama according to your operating system's guidelines.
3. Verify your installation by running:
```bash
ollama --version
```
Dependencies: Alexandria's core is written in Python, so install the required packages:

```bash
pip install -r requirements.txt
```
Configuration: Customize your Alexandria configuration if needed. A sample configuration file (config.example.json) is provided—copy it to config.json and adjust the settings.
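For reference, a config might look like the sketch below. These keys are illustrative assumptions, not Alexandria's actual schema; consult config.example.json for the real options.

```json
{
  "model": "llama3",
  "embedding_model": "nomic-embed-text",
  "index_path": ".alexandria/index.faiss",
  "ignored_dirs": ["venv", "node_modules", "target"]
}
```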
Build (if necessary): If Alexandria requires building or compiling components, follow the build instructions in the BUILD.md file.
🔧 Python – Core CLI development
🔧 FAISS – High-speed local indexing and semantic search over documentation embeddings
🔧 Ollama – Self-hosted AI model for answering queries
🔧 Rich CLI – Clean, intuitive terminal interface
✅ Scans dependencies automatically (Python, Node, Java, etc.)
✅ Uses FAISS for fast semantic search
✅ Works entirely offline (no API calls)
✅ Supports custom model selection via Ollama
✅ Persistent embeddings that update when dependencies change
Alexandria automatically detects dependencies by scanning import statements (Python) and package manager files (package.json, requirements.txt, Cargo.toml, etc.). It avoids unnecessary indexing (e.g., skips venv/ for Python).
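As an illustration, a simplified scanner along these lines could handle the Python and Node cases. The function names and heuristics here are assumptions for the sketch, not Alexandria's actual implementation:

```python
import ast
import json
from pathlib import Path

SKIP_DIRS = {"venv", ".venv", "node_modules", "__pycache__"}

def python_imports(root: Path) -> set[str]:
    """Collect top-level module names from import statements in .py files."""
    found: set[str] = set()
    for path in root.rglob("*.py"):
        if any(part in SKIP_DIRS for part in path.parts):
            continue  # skip virtualenvs and other vendored code
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # ignore files that don't parse cleanly
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
    return found

def node_dependencies(root: Path) -> set[str]:
    """Read declared dependencies from package.json, if present."""
    manifest = root / "package.json"
    if not manifest.exists():
        return set()
    data = json.loads(manifest.read_text(encoding="utf-8"))
    return set(data.get("dependencies", {})) | set(data.get("devDependencies", {}))
```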
For each detected library, Alexandria fetches relevant documentation links. It then generates embeddings (vector representations of the documentation) and indexes them with FAISS, storing the index persistently for fast retrieval.
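A minimal sketch of that indexing step, assuming the ollama Python client's embeddings endpoint and a local nomic-embed-text model (both assumptions; Alexandria may use a different embedding backend):

```python
import faiss  # pip install faiss-cpu
import numpy as np
import ollama  # pip install ollama

def build_index(chunks: list[str], index_path: str = "docs.faiss") -> faiss.Index:
    """Embed documentation chunks and persist them in a FAISS index."""
    vectors = [
        ollama.embeddings(model="nomic-embed-text", prompt=chunk)["embedding"]
        for chunk in chunks
    ]
    matrix = np.asarray(vectors, dtype="float32")
    index = faiss.IndexFlatL2(matrix.shape[1])  # exact L2 search, no training step
    index.add(matrix)
    faiss.write_index(index, index_path)  # persist so later runs skip re-embedding
    return index
```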
When you ask a question (`alexandria ask "How do I use FastAPI middleware?"`), Alexandria retrieves the most relevant sections of documentation. It then uses Ollama's local LLM to generate a concise, useful response—without requiring an internet connection.
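The retrieval step might then look like this sketch; the prompt format and model name are assumptions for illustration:

```python
import faiss
import numpy as np
import ollama

def answer(question: str, index: faiss.Index, chunks: list[str], k: int = 3) -> str:
    """Embed the question, retrieve the k nearest chunks, and ask the local model."""
    query = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
    _, ids = index.search(np.asarray([query], dtype="float32"), k)
    context = "\n\n".join(chunks[i] for i in ids[0])
    reply = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": f"Answer using only this documentation:\n\n{context}"
                       f"\n\nQuestion: {question}",
        }],
    )
    return reply["message"]["content"]
```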
Since all embeddings are stored locally, Alexandria doesn't need API calls to function. Rerunning `alexandria update` refreshes embeddings when dependencies change, keeping results accurate.
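One plausible way to detect when a refresh is needed, sketched here with a simple content hash over manifest files (an assumption about the mechanism, not Alexandria's actual approach):

```python
import hashlib
from pathlib import Path

MANIFESTS = ("requirements.txt", "package.json", "Cargo.toml", "go.mod", "pom.xml")

def manifest_fingerprint(root: Path) -> str:
    """Hash the dependency manifests; a changed hash means the index is stale."""
    digest = hashlib.sha256()
    for name in MANIFESTS:
        path = root / name
        if path.exists():
            digest.update(path.read_bytes())
    return digest.hexdigest()
```

If the stored fingerprint matches the current one, the existing index can be reused as-is; otherwise the affected documentation is re-embedded.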
This project is licensed under the MIT License. See the LICENSE file for details.