
Mini-RAG

Local Retrieval Augmented Generation for your Obsidian notes


What is Mini-RAG?

Mini-RAG lets you chat with a locally running LLM in the context of selected Obsidian notes and folders. For the LLM, you can select any locally installed Ollama model (see: Configure Mini-RAG).
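As a rough sketch of the idea (the /api/chat endpoint and request shape below are Ollama's public REST API; how Mini-RAG actually assembles its prompts is an assumption), chatting "in the context of" a note amounts to sending the note's text alongside your question:

```ts
// Sketch: ask a locally running Ollama model a question, using note text as context.
// POST /api/chat is Ollama's public REST API; the prompt layout is illustrative.
async function chatWithNotes(
  question: string,
  noteText: string,
  model = "llama3.2" // any model installed in your local Ollama
): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    body: JSON.stringify({
      model,
      stream: false, // request one complete response instead of a token stream
      messages: [
        { role: "system", content: `Answer using these notes:\n\n${noteText}` },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await response.json();
  return data.message.content; // non-streaming responses carry a single message
}
```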

Setting Up Mini-RAG

Install Ollama

If you don't already have Ollama installed, download and install it from the official Ollama website (https://ollama.com).

This is necessary because Mini-RAG relies on a locally running Ollama instance for its responses, which is also why Mini-RAG is currently a desktop-only plugin.
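If you want to verify that Ollama is running before enabling the plugin, Ollama's REST API exposes GET /api/tags, which lists locally installed models. A minimal check (the endpoint and default port are Ollama's; the function name is just for illustration):

```ts
// Sketch: confirm a local Ollama instance is reachable and list installed models.
// GET /api/tags is part of Ollama's public REST API; 11434 is its default port.
async function listOllamaModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const response = await fetch(`${baseUrl}/api/tags`);
  if (!response.ok) {
    throw new Error(`Ollama not reachable at ${baseUrl} (HTTP ${response.status})`);
  }
  const data = await response.json();
  return data.models.map((m: { name: string }) => m.name); // e.g. ["llama3.2:latest"]
}
```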

Configure Mini-RAG

Open "options" by clicking on the gear icon then navigate to Community Plugins > Mini-RAG > Options. Here you can set the:

  • Ollama URL: If left unset, Ollama's default URL (http://localhost:11434) is used
  • Model: From a dropdown list of AI Models installed on your local Ollama setup
  • Temperature: Higher temperatures give more creative responses, but also lead to more hallucinations
  • Enable context-free chats: Provides the option to chat with an LLM without the context of a note or folder
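To make these options concrete, here is a hedged sketch of how they could map onto an Ollama chat request. The options.temperature field and the default URL are Ollama's public API; the MiniRagSettings shape is hypothetical, not the plugin's actual code:

```ts
// Hypothetical settings shape mirroring the options listed above.
interface MiniRagSettings {
  ollamaUrl: string;   // empty string falls back to Ollama's default URL
  model: string;       // one of the models installed in your local Ollama
  temperature: number; // higher = more creative, but more hallucination-prone
}

// Sketch: turn the settings into an Ollama /api/chat request payload.
function buildChatRequest(settings: MiniRagSettings, messages: object[]) {
  const baseUrl = settings.ollamaUrl || "http://localhost:11434"; // Ollama's default
  return {
    url: `${baseUrl}/api/chat`,
    body: {
      model: settings.model,
      messages,
      stream: false,
      options: { temperature: settings.temperature }, // Ollama sampling options
    },
  };
}
```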

Using Mini-RAG

Opening a Mini-RAG Chat

This is done from the right-click context menu (a sketch of how such menu entries are registered follows this list). You will see the "Mini-RAG" option when you:

  • Right-Click within a note
  • Right-Click a note in the sidebar
  • Right-Click a folder in the sidebar
  • Open a note's triple-dot menu
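For the curious, this is roughly how Obsidian plugins register such entries. The file-menu and editor-menu workspace events below are real Obsidian API, while the handler bodies are illustrative rather than Mini-RAG's actual code:

```ts
import { Plugin } from "obsidian";

// Sketch: adding a "Mini-RAG" entry to Obsidian's context menus.
// "file-menu" covers notes/folders in the sidebar and a note's triple-dot menu;
// "editor-menu" covers right-clicks inside an open note.
export default class ExamplePlugin extends Plugin {
  async onload() {
    this.registerEvent(
      this.app.workspace.on("file-menu", (menu, file) => {
        menu.addItem((item) =>
          item.setTitle("Mini-RAG").onClick(() => {
            // Hypothetical: launch a chat scoped to this note or folder.
            console.log("Open Mini-RAG chat for", file.path);
          })
        );
      })
    );
    this.registerEvent(
      this.app.workspace.on("editor-menu", (menu, editor, view) => {
        menu.addItem((item) =>
          item.setTitle("Mini-RAG").onClick(() => {
            // Hypothetical: launch a chat scoped to the active note.
            console.log("Open Mini-RAG chat for", view.file?.path);
          })
        );
      })
    );
  }
}
```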

Saving Conversations

To save a Mini-RAG conversation, click the "Save" button. If you continue the conversation afterwards, you will need to click "Save" again to update the saved conversation.
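Behind a button like this, persisting a transcript to the vault could look like the sketch below. vault.create, vault.modify, and vault.createFolder are real Obsidian API methods; the folder name and file layout are assumptions, not Mini-RAG's actual behavior:

```ts
import { App, TFile, normalizePath } from "obsidian";

// Sketch: save a chat transcript as a note, overwriting it on later re-saves.
async function saveConversation(app: App, title: string, transcript: string) {
  const folder = "Mini-RAG Chats"; // hypothetical folder name
  if (!app.vault.getAbstractFileByPath(folder)) {
    await app.vault.createFolder(folder);
  }
  const path = normalizePath(`${folder}/${title}.md`);
  const existing = app.vault.getAbstractFileByPath(path);
  if (existing instanceof TFile) {
    await app.vault.modify(existing, transcript); // update the saved conversation
  } else {
    await app.vault.create(path, transcript); // first "Save" creates the note
  }
}
```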


Author

For more about the author, visit JJWheatley.com
