gptel: Make gptel-model variable definition dynamic by pabl0 · Pull Request #606 · karthink/gptel · GitHub

Merged: 1 commit into karthink:master on Feb 7, 2025

Conversation

@pabl0 (Contributor) commented Feb 2, 2025
  • gptel.el (gptel-model): Ensure the docstring documentation and customization options are current (in sync with
    `gptel--openai-models') by dynamically generating them instead of hard-coding.
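In Elisp terms, the change described above amounts to splicing the model list into the defcustom's :type form with backquote at load time. A minimal sketch (not the exact PR diff; the default value gpt-4o-mini is taken from the discussion below, and the exact shape of the entries in `gptel--openai-models' is assumed to match the `(car model)' convention used later in this thread):

```elisp
;; Sketch: generate the customization choices from
;; `gptel--openai-models' instead of hard-coding them.
(defcustom gptel-model 'gpt-4o-mini
  "LLM model to use for chat, as a symbol."
  :group 'gptel
  :type `(choice
          (symbol :tag "Specify model name")
          ;; One `const' choice per known OpenAI model:
          ,@(mapcar (lambda (model)
                      (list 'const :tag (symbol-name (car model))
                            (car model)))
                    gptel--openai-models)))
```

Because the backquoted form is evaluated once when the defcustom is loaded, the Customize menu stays in sync with `gptel--openai-models' without duplicating the list by hand.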

@karthink (Owner) commented Feb 4, 2025

Thanks! Now that I look at the code, this list is ChatGPT-specific, and alternatives like Claude are very popular now (as opposed to a year ago), so I'm thinking we don't even need to "hard-code" the options in the docstring. We can just say that gptel-model should be the name of a model as a symbol, and continue to provide the models for the OpenAI backend as available options, since it's the default.

What do you think?

@psionic-k (Contributor) commented
This looks like an improvement. I would question having a hard-coded default value. Can't we just decide the default based on the first one a configuration is found for? That might need to happen downstream. This variable would be nil in most cases and could be set / bound only when there was more than one option and the default wasn't already first.

The way things are going, we're rapidly going to wind up with several models available being the norm. Some for completions, some for translation, some for specific languages, some local, some remote. I don't expect this variable to have meaning in that direction.

@karthink (Owner) commented Feb 5, 2025

> Can't we just decide the default based on the first one a configuration is found for? That might need to happen downstream. This variable would be nil in most cases and could be set / bound only when there was more than one option and the default wasn't already first.

The user is expected to set gptel-model in their configuration. If they don't, it defaults to gpt-4o-mini. How is the current behavior different from what you describe?

> The way things are going, we're rapidly going to wind up with several models available being the norm. Some for completions, some for translation, some for specific languages, some local, some remote. I don't expect this variable to have meaning in that direction.

  • There's value in simplicity. Users like gptel because it's simple to use.
  • Providing a gptel-translation-model, gptel-completion-model and so on doesn't scale and imposes constraints from the top. We don't have to decide the boundaries of users' tasks.
  • It won't stop there, because a translation model will also need a translation system message, and so on.
  • Users can use something like the presets system (gptel agents or configuration presets #542) to define everything needed for an LLM task in one place, and switch to this preset buffer-locally or for the next request.

@pabl0 (Contributor, Author) commented Feb 5, 2025

I agree that listing the models in the docstring is unnecessary, so I'll drop that part. However, we might consider including the available models from the default OpenAI backend or the currently active backend. I'm uncertain which option would be the best approach for listing them, or how to get the list of models for the current backend.

* gptel.el (gptel-model): Ensure the docstring documentation and
customization options are current (in sync with
`gptel--openai-models') by dynamically generating them
instead of hard-coding.
@karthink (Owner) commented Feb 6, 2025

This looks good to me.

> However, we might consider including the available models from the default OpenAI backend or the currently active backend.

While you can do something like this to use the currently active backend:

:type `(choice
        (symbol :tag "Specify model name")
        ,@(mapcar (lambda (model)
                    (list 'const :tag (symbol-name (car model))
                          (car model)))
                  (gptel-backend-models gptel-backend)))

it won't actually work because the defcustom is not dynamically evaluated. So changing the active backend will not cause the list of choices to be updated, and it will always be what it was set to at load-time, which is just gptel--openai-models.
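If truly dynamic choices were wanted, one workaround (not part of this PR) would be to rebuild the widget type at runtime: a defcustom's :type is stored on the symbol's `custom-type' property, so it can be replaced after load. A sketch, where the function name is hypothetical and `gptel-backend-models' is assumed (as in the snippet above) to return entries whose car is the model symbol:

```elisp
;; Sketch: refresh the Customize choices for `gptel-model' so they
;; reflect the currently active backend.  Could be called from a hook
;; or advice that runs whenever `gptel-backend' changes.
(defun my/gptel-refresh-model-choices ()
  "Rebuild `gptel-model's customization type from the active backend."
  (put 'gptel-model 'custom-type
       `(choice
         (symbol :tag "Specify model name")
         ,@(mapcar (lambda (model)
                     (list 'const :tag (symbol-name (car model))
                           (car model)))
                   (gptel-backend-models gptel-backend)))))
```

This keeps the defcustom definition simple while letting the Customize buffer track backend switches, at the cost of mutating the symbol's plist outside the defcustom form.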

@karthink (Owner) commented Feb 6, 2025

Can I go ahead and merge this now?

@pabl0 (Contributor, Author) commented Feb 7, 2025

Yes I think it's good enough, thanks.

@karthink merged commit e235f1d into karthink:master on Feb 7, 2025
@karthink (Owner) commented Feb 7, 2025

@pabl0 Thanks for the PR!

marcbowes pushed a commit to marcbowes/gptel that referenced this pull request Mar 19, 2025
* gptel.el (gptel-model): Ensure the docstring documentation and
customization options are current (in sync with
`gptel--openai-models') by dynamically generating them
instead of hard-coding.