A simple and flexible Emacs package for interacting with Large Language Models (LLMs) without changing your workflow. Chat with AI assistants, get code help, and maintain conversation history - all without leaving Emacs.
- Multi-provider support: OpenAI, Anthropic, xAI, and Perplexity
- Per-project conversation management: Persistent, per-project chat history with org-mode integration
- Context-aware prompts: Add code snippets and text as context for better responses
- Flexible system messages: Pre-configured prompts for different use cases (code review, generation, Q&A, etc.)
- Export capabilities: Convert conversations to HTML for sharing
- Rating system: Rate and organize your conversations
- Emacs 29.1 or later
- pandoc (for markdown to org conversion)
- API keys for your preferred LLM providers
- Clone this repository:

  ```sh
  git clone https://github.com/Jef808/ellm.git
  ```

- Add to your Emacs configuration:

  ```elisp
  (add-to-list 'load-path "/path/to/ellm")
  (require 'ellm)
  ```
In `packages.el`:

```elisp
(package! ellm
  :recipe (:host github
           :repo "jef808/ellm"
           :files ("ellm.el" "format_org.lua")))
```
In `config.el`:

```elisp
(defun my/get-ellm-api-key (provider)
  "Get API key for PROVIDER from a list."
  (let ((api-keys
         '((openai . "your-openai-key")
           (anthropic . "your-anthropic-key")
           (xai . "your-xai-key")
           (perplexity . "your-perplexity-key")
           ;; etc.
           )))
    (alist-get provider api-keys)))

(use-package! ellm
  :custom
  (ellm-api-key #'my/get-ellm-api-key)
  :config
  (ellm-setup-persistance)
  (global-ellm-mode))
```
Unless you followed the Doom Emacs installation example above, you will need to set up your API keys.
One option is using environment variables:
```sh
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export XAI_API_KEY="your-xai-key"
export PERPLEXITY_API_KEY="your-perplexity-key"
# ... etc.
```
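If you go this route, a key-retrieval function can read those variables with `getenv`. A minimal sketch (the function name is hypothetical), mapping each provider symbol to its matching variable:

```elisp
(defun my/get-ellm-api-key-from-env (provider)
  "Return the API key for PROVIDER (a symbol) from the environment.
Maps e.g. the symbol `openai' to the OPENAI_API_KEY variable."
  (getenv (format "%s_API_KEY" (upcase (symbol-name provider)))))

(setq ellm-api-key #'my/get-ellm-api-key-from-env)
```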
Another option is to customize `ellm-api-key` to use your own key-retrieval function. For example, as in the Doom Emacs example:
```elisp
(defun my/get-ellm-api-key (provider)
  (let ((api-keys
         '((openai . "your-openai-key")
           (anthropic . "your-anthropic-key")
           (xai . "your-xai-key")
           (perplexity . "your-perplexity-key")
           ;; etc.
           )))
    (alist-get provider api-keys)))

(setq ellm-api-key #'my/get-ellm-api-key)
```
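If you would rather not keep keys in your config at all, another possibility is to pull them from `auth-source` (e.g. `~/.authinfo.gpg`). A minimal sketch (the function name is hypothetical), assuming one entry per provider whose machine field is the provider name:

```elisp
(require 'auth-source)

(defun my/get-ellm-api-key-from-authinfo (provider)
  "Look up the API key for PROVIDER (a symbol) via auth-source.
Assumes an authinfo entry such as:
  machine openai password your-openai-key"
  (auth-source-pick-first-password :host (symbol-name provider)))

(setq ellm-api-key #'my/get-ellm-api-key-from-authinfo)
```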
```elisp
;; Enable globally
(global-ellm-mode 1)

;; Or just in specific modes
(add-hook 'prog-mode-hook #'ellm-mode)
```
- `C-c ; c` - Open the config buffer to set your desired provider, etc.
- `C-c ; n` - Start a new conversation
- `C-c ; N` - Continue the conversation at point (inside an ellm conversation file)
- `C-c ; m` - Optionally, add the marked region to the context before making a prompt
```elisp
(use-package ellm
  :config
  ;; Enable persistence across Emacs sessions
  (ellm-setup-persistance)
  (global-ellm-mode 1))
```
ellm comes with several pre-configured system messages for different tasks:
- `default` - General assistant
- `code-generation` - Code writing help
- `code-review` - Code review and feedback
- `code-refactoring` - Code improvement suggestions
- `question-and-answer` - Technical Q&A
- `software-architect` - System design help

Switch between them by pressing `C-c ; c` and navigating to "System Message", or:

```elisp
(ellm-set-system-message 'code-generation)
```
Add your own system messages:
```elisp
(add-to-list 'ellm-system-messages
             '(my-custom-prompt :type string
               :value "You are a helpful assistant specialized in..."))
```
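Once added, it can be selected like the built-in messages:

```elisp
(ellm-set-system-message 'my-custom-prompt)
```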
- Start a conversation: `C-c ; n`
- View conversations: `C-c ; ;`
- Continue at point: place the cursor in a conversation and press `C-c ; N`
Add context to your prompts for better responses:
- Select the text/code you want to include as context
- Add it to the context: `C-c ; m`
- View the context buffer: `C-c ; x`
- Chat with context: `C-c ; n`
- Remove a region from the context: `C-c ; d`
- Clear all context: `C-c ; C-M-k`
- Rate conversations: `C-c ; r` (1-5 stars, with color coding)
- Export to HTML: `C-c ; e`
- Search conversations for keywords: `C-c ; s`
- Navigate messages within a conversations file: `C-c ; j` / `C-c ; k` (next/previous)
- Fold conversations within a conversations file: `C-c ; o`
Access the configuration menu with `C-c ; c`:
- Switch providers and models
- Adjust temperature and token limits
- Change system messages
- Toggle debug/test modes
| Key Binding | Command | Description |
|---|---|---|
| `C-c ; n` | `ellm-context-complete` | New conversation with context |
| `C-c ; N` | `ellm-chat-at-point` | Continue conversation at point |
| `C-c ; ;` | `ellm-show-conversations-buffer` | Show conversations |
| `C-c ; c` | `ellm-set-config` | Configuration menu |
| `C-c ; m` | `ellm-add-context-chunk` | Add selected text to context |
| `C-c ; d` | `ellm-remove-context-chunk` | Remove context at point |
| `C-c ; x` | `ellm-view-context-buffer` | View context buffer |
| `C-c ; C-M-k` | `ellm-clear-context` | Clear all context |
| `C-c ; r` | `ellm-org-rate-response-and-refresh` | Rate conversation |
| `C-c ; e` | `ellm-export-conversation` | Export to HTML |
| `C-c ; s` | `ellm-search-in-conversations` | Search conversations |
| `C-c ; j/k` | `ellm-org-next/previous-message` | Navigate messages |
| `C-c ; o` | `ellm-org-fold-conversations-buffer` | Fold all conversations |
Conversations are stored as org files in `~/.ellm/`:

```
~/.ellm/
├── conversations-general.org       # Default conversations
├── conversations-my-project.org    # Project-specific chats
└── conversations-other-proj.org    # Another project's chats
```

where the suffix is derived from the active project.
Each conversation includes metadata (model, temperature, rating) and full chat history.
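As an illustration of this naming scheme, here is a hypothetical sketch (not ellm's actual code) of deriving the current project's conversations file with project.el:

```elisp
(require 'project)

;; Hypothetical sketch: derive the conversations file name from the
;; active project's directory name, falling back to "general".
(defun my/ellm-conversations-file ()
  "Return the org file holding conversations for the current project."
  (let* ((proj (project-current))
         (suffix (if proj
                     (file-name-nondirectory
                      (directory-file-name (project-root proj)))
                   "general")))
    (expand-file-name (format "conversations-%s.org" suffix)
                      "~/.ellm/")))
```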
| Provider | Models | Notes |
|---|---|---|
| OpenAI | GPT-4o, GPT-3.5-turbo | Most reliable |
| Anthropic | Claude 3.5 Sonnet/Haiku | Great for coding |
| xAI | Grok Beta | Newer option |
| Perplexity | Sonar models | Web-enhanced |
- Use the `code-generation` system message for new code
- Use `code-review` for feedback on existing code
- Use `mark-defun` and add the relevant function definitions as context before asking questions (see the sketch below)
- The programming language is detected from the major mode when making the prompt
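For example, a small helper (hypothetical, not part of ellm) can mark the current defun and hand it to ellm's context command in one step; it assumes `ellm-add-context-chunk` acts on the active region, as its `C-c ; m` binding suggests:

```elisp
(defun my/ellm-add-defun-to-context ()
  "Mark the defun at point and add it to the ellm context.
Hypothetical helper; assumes `ellm-add-context-chunk' uses the region."
  (interactive)
  (save-excursion
    (mark-defun)
    (call-interactively #'ellm-add-context-chunk)))
```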
- Use the `question-and-answer` system message for factual queries
- Add documentation or reference material as context
- Rate conversations to build a knowledge base
- Export important conversations for future reference or sharing
```elisp
(ellm-toggle-debug-mode)
```

This logs all requests and responses to the `*ellm-logs*` buffer.
For development or testing with minimal token usage:
```elisp
(ellm-toggle-test-mode)
```

(also available as a toggle in the config menu with `C-c ; c`)
This uses the smallest configured model with 10 output tokens.
Check the `*ellm-logs*` buffer to identify which of these common issues applies:
- Rate limits: Try again or switch providers
- Context too long: Clear context or remove some chunks
- Org formatting issues: Ensure pandoc is installed on your system
Pull requests are welcome!
This project is licensed under the MIT License - see the LICENSE file for details.