Externalizing prompts

Emmanuel Bernard

Oct 18, 2024, 10:06:06 AM
to Quarkus Development mailing list
Here is an article on the merits of externalizing LLM prompts from applications.

I think we are one step better than a plain Java app since our prompts are in annotations, which means we can offer a nice Dev UI tool to discover and edit them.
I know we have ways to externalize the prompts (though I haven't seen it in any documentation nor in real life), so the next step might already very well be implemented.

One thing I don't like with externalization is when it's forced; you only really need it sometimes.
So how about the following: we could maybe have a convention where one can optionally override prompts.

```
@RegisterAiService
public interface WombatChat {

    @UserMessage("""
            Wombastify {topic} to {strength} on a scale of 1 to 10
            """)
    String wombasticify(String topic, int strength);
}
```

And in application.properties:

```
quarkus.ai.usermessage.WombatChat.wombasticify[.0] = Wombastify {topic} to {strength} on a scale of 1 to 100
```

A few things:
- externalization does not have to be planned, e.g. @UserMessage("{{external.123}}") (see the sketch after this list)
- you can use the FQCN, but if the unqualified class name is not ambiguous, you can use that instead
- .0 is optional and caters for the case where several @UserMessage prompts are present
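
To make the unplanned case concrete, a rough sketch (the `external.123` key and the property name are entirely made up, none of this exists today):

```
@RegisterAiService
public interface WombatChat {

    // The prompt body lives outside the code; the annotation only carries
    // a placeholder key (hypothetical syntax)
    @UserMessage("{{external.123}}")
    String wombasticify(String topic, int strength);
}
```

```
# hypothetical property, resolved by the extension before the template is rendered
quarkus.ai.usermessage.external.123 = Wombastify {topic} to {strength} on a scale of 1 to 10
```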

An alternative is to host the rewritten prompts in a separate class (instead of application.properties), but that feels too geeky, even though text blocks (`"""`) are effectively nicer than what properties files can do.
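
For illustration, the separate class alternative could look something like this (completely hypothetical, including the annotation name):

```
// Hypothetical: overrides looked up by service class + method name
@PromptOverrides(WombatChat.class)
public class WombatChatPrompts {

    // A text block keeps a long prompt readable, unlike a properties file
    public static final String WOMBASTICIFY = """
            Wombastify {topic} to {strength} on a scale of 1 to 100
            """;
}
```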

Thoughts?

Emmanuel Bernard

Oct 18, 2024, 10:07:55 AM
to Quarkus Development mailing list
Though we might want a dedicated file rather than reusing application.properties, because of the things mentioned in the article (like different prompts for different models).
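
Something like a dedicated prompts file, for example (the format is entirely made up, just to illustrate the per-model idea):

```
# prompts.yaml -- hypothetical dedicated file, not an existing feature
WombatChat:
  wombasticify:
    default: |
      Wombastify {topic} to {strength} on a scale of 1 to 100
    # hypothetical per-model override, as discussed in the article
    gpt-4o-mini: |
      You wombastify things. Wombastify {topic} to {strength} on a scale of 1 to 100.
```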

Georgios Andrianakis

Oct 18, 2024, 10:19:21 AM
to quark...@googlegroups.com
Hey Emmanuel,

Currently the annotations also allow you to "link" to a file on the classpath that contains the corresponding prompt.
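
For reference, if I remember correctly that looks roughly like this (a prompt file under src/main/resources, referenced from the annotation):

```
@RegisterAiService
public interface WombatChat {

    // The prompt body is loaded from the classpath resource instead of being inlined
    @UserMessage(fromResource = "prompts/wombasticify.txt")
    String wombasticify(String topic, int strength);
}
```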

Your idea of using application.properties is certainly interesting (and can be easily implemented), but I see two issues:
  • If prompts are long, the file starts to look really, really bad (although with YAML it would be better; see the sketch after this list)
  • The IDE can't really suggest what to use as the key
That's not to say we shouldn't do it, I am just trying to think through the various points.
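
To illustrate the first point, the same long prompt in properties vs YAML (made-up content, reusing Emmanuel's proposed key):

```
# application.properties: a long prompt needs escaped line continuations
quarkus.ai.usermessage.WombatChat.wombasticify = You are a world-class wombastification expert.\n\
Wombastify {topic} to {strength} on a scale of 1 to 100.\n\
Answer in at most three sentences.
```

```
# application.yaml (with the YAML config extension): a block scalar keeps it readable
quarkus:
  ai:
    usermessage:
      WombatChat:
        wombasticify: |
          You are a world-class wombastification expert.
          Wombastify {topic} to {strength} on a scale of 1 to 100.
          Answer in at most three sentences.
```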



--

Georgios Andrianakis

Independent Contractor


Mario Fusco

Oct 18, 2024, 10:39:21 AM
to quark...@googlegroups.com
Hi Emmanuel,

The article you linked is very interesting, but given my (relatively limited) experience in developing LLM-based apps, I believe I would feel less comfortable externalizing the LLM prompts than keeping them in the AI service annotations as we do now. In particular:
  • Having the LLM prompt together with the AI service interface keeps the whole service definition in a single place
    • This is even more important when the LLM prompt has placeholders that point to arguments of the service method
  • It is becoming very common to provide the LLM with relatively long prompts to give it as much context as possible (see for instance this example taken from the demo app that Lize and I used for our talk at Devoxx), and I'm afraid that adding such prompts to the application.properties file will make it unreadable and harder to maintain.
    • Keeping all the prompts in a single external (YAML?) file would be a little bit better, but with a complex application I would still end up wasting time searching that huge file for the prompt of the specific service I'm interested in, instead of seeing it directly in the annotation of the service interface.
Of course this point of view is simply based on my typical workflow and others may have different needs or points of view.

Mario

Dmytro Liubarskyi

Oct 18, 2024, 3:34:30 PM
to quark...@googlegroups.com
Hi all,

A few thoughts:
- Sometimes the "prompt engineer" in a team is a non-technical person who wants to play with and tweak the prompts in the application, so in that case having them in the code or in property files might not be very convenient
- For evaluations, A/B testing, etc., it is more convenient to keep prompts externally, close to the place where evaluation happens and where results are stored. These systems usually do prompt versioning as well.

I would probably not restrict users to a specific (external) location for prompts; they should be able to keep them wherever they prefer.

Sergey Beryozkin

Oct 18, 2024, 5:34:51 PM
to quark...@googlegroups.com
Hi All,

I guess guardrails will be essential in such cases.

Thanks Sergey

Max Rydahl Andersen

Oct 20, 2024, 6:07:56 PM
to Quarkus Development mailing list
Given the templates are Qute templates, aren't there already many ways to externalize them, as property values or even as files, via something like roq-data?

Emmanuel Bernard

Oct 21, 2024, 2:47:13 PM
to quark...@googlegroups.com
Yes, reading the Qute reference guide I can see a few things.
There is a default mapping between "resource" and file name which we could reuse for consistency, though we might need to adapt it a bit in the context of multiple prompts per method.
There is a notion of variant which could be used to separate prompts for different LLMs.

What I don't know is whether it's optionally externalizable like I described in my first email.
So something on the mapping side still needs to be figured out.
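
Roughly, reusing the Qute templates directory convention, the prompt files could live somewhere like this (purely a sketch, none of this is wired up today):

```
src/main/resources/templates/WombatChat/wombasticify.txt         # default mapping: service class / method -> file
src/main/resources/templates/WombatChat/wombasticify.1.txt       # hypothetical suffix for a second @UserMessage
src/main/resources/templates/WombatChat/wombasticify$gpt4o.txt   # hypothetical variant for a specific model
```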


