Local Retrieval Augmented Generation for your Obsidian notes
Mini-RAG lets you chat with a locally running LLM in the context of selected Obsidian notes and folders. For the LLM, you can select any locally installed Ollama model (see: Configure Mini-RAG).
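Under the hood, "retrieval augmented generation" means the plugin retrieves the most relevant pieces of your notes and hands them to the model alongside your question. The sketch below illustrates that loop in broad strokes; it is not Mini-RAG's actual code. The `/api/embeddings` and `/api/chat` endpoints on port 11434 are Ollama's documented defaults, while the model names, chunking, and top-3 cutoff are placeholder assumptions:

```ts
// Minimal RAG sketch (illustrative only, not Mini-RAG's source).
// Assumes a local Ollama server at its default http://localhost:11434.
const OLLAMA = "http://localhost:11434";

// Embed a piece of text with Ollama's embeddings endpoint.
async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA}/api/embeddings`, {
    method: "POST",
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  return (await res.json()).embedding;
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score each note chunk against the question, keep the best matches,
// and ask the chat model to answer using only that context.
async function ragAnswer(question: string, chunks: string[]): Promise<string> {
  const qVec = await embed(question);
  const scored = await Promise.all(
    chunks.map(async (c) => ({ c, score: cosine(qVec, await embed(c)) }))
  );
  const context = scored
    .sort((x, y) => y.score - x.score)
    .slice(0, 3) // top-3 chunks; the real cutoff is a design choice
    .map((s) => s.c)
    .join("\n---\n");

  const res = await fetch(`${OLLAMA}/api/chat`, {
    method: "POST",
    body: JSON.stringify({
      model: "llama3",
      stream: false,
      messages: [
        { role: "system", content: `Answer using this context:\n${context}` },
        { role: "user", content: question },
      ],
    }),
  });
  return (await res.json()).message.content;
}
```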
If you don't already have Ollama installed, you can download it from https://ollama.com.
This is necessary because Mini-RAG relies on a locally running Ollama instance to generate its responses, which is also why Mini-RAG is currently a desktop-only plugin.
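To confirm that Ollama is reachable before configuring the plugin, you can query its default endpoint. This is a minimal check, assuming Ollama's documented `/api/tags` route (which lists your locally installed models) on the default port 11434:

```ts
// Quick health check: /api/tags lists the models installed locally.
const res = await fetch("http://localhost:11434/api/tags");
const { models } = await res.json();
console.log(models.map((m: { name: string }) => m.name));
```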
Open "options" by clicking on the gear icon then navigate to Community Plugins > Mini-RAG > Options
. Here you can set the:
- Ollama URL: If left unset, Ollama's default URL (http://localhost:11434) is used
- Model: Choose from a dropdown list of the models installed on your local Ollama setup
- Temperature: Higher temperatures give more creative responses, but also lead to more hallucinations
- Enable context-free chats: Adds the option to chat with the LLM without the context of a note or folder
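Here is a hedged illustration of where these settings typically land in a request to Ollama. The `/api/chat` request shape, including `options.temperature`, is Ollama's documented API; the model name, temperature value, and message are placeholders:

```ts
// Illustrative request showing where the Options values end up
// (assumes Ollama's documented /api/chat request shape).
const response = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  body: JSON.stringify({
    model: "llama3",               // "Model" option
    options: { temperature: 0.7 }, // "Temperature" option
    stream: false,
    messages: [{ role: "user", content: "Summarize my note." }],
  }),
});
console.log((await response.json()).message.content);
```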
You start a Mini-RAG chat from the right-click context menu (a sketch of how such menu entries are registered follows the list). You will see the "Mini-RAG" option when you:
- Right-click within a note
- Right-click a note in the sidebar
- Right-click a folder in the sidebar
- Open a note's triple-dot menu
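If you are curious how entries like these appear in the menus, the sketch below shows the generic mechanism Obsidian offers plugin authors (the `file-menu` and `editor-menu` workspace events). It is an illustration of that mechanism, not Mini-RAG's actual source; the handler bodies are placeholders:

```ts
import { Menu, Plugin, TAbstractFile } from "obsidian";

// Generic sketch of how an Obsidian plugin registers context-menu entries.
export default class ExamplePlugin extends Plugin {
  async onload() {
    // Fires when a note or folder is right-clicked in the sidebar.
    this.registerEvent(
      this.app.workspace.on("file-menu", (menu: Menu, file: TAbstractFile) => {
        menu.addItem((item) =>
          item.setTitle("Mini-RAG").onClick(() => {
            // Open a chat view using `file` (note or folder) as context.
          })
        );
      })
    );

    // Fires when you right-click inside an open note's editor.
    this.registerEvent(
      this.app.workspace.on("editor-menu", (menu: Menu) => {
        menu.addItem((item) => item.setTitle("Mini-RAG").onClick(() => {}));
      })
    );
  }
}
```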
To save a Mini-RAG conversation, click the "Save" button. If you continue the conversation after this, you will need to click "Save" again to update the saved conversation.
For more about the author, visit JJWheatley.com.