Multiple backends · Issue #2 · graniet/rllm · GitHub
Multiple backends #2
Open
@alilee

Description

This is awesome - congrats.

Architecturally, it would be great if you could specify one or more backends and use async to process responses as they come back from any of them. Do you think this is how it would be done, or should each LLM chat run in its own discrete async context? In any case, a parallel pattern would be very useful for evals (see the sketch below).
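
For reference, here is a minimal sketch of the fan-out pattern using plain tokio plus the futures crate. `Backend` and its `chat` method are hypothetical stand-ins invented for illustration, not rllm's actual types; the point is only that `FuturesUnordered` lets you handle each backend's response as soon as it arrives:

```rust
// Sketch only: assumes tokio (with "macros"/"rt-multi-thread" features)
// and futures as dependencies. `Backend`/`chat` are invented for
// illustration and do not come from rllm.
use futures::stream::{FuturesUnordered, StreamExt};

struct Backend {
    name: &'static str,
}

impl Backend {
    // Stand-in for a real provider call; imagine network latency here.
    async fn chat(&self, prompt: &str) -> String {
        format!("[{}] reply to: {prompt}", self.name)
    }
}

#[tokio::main]
async fn main() {
    let backends = [
        Backend { name: "openai" },
        Backend { name: "anthropic" },
        Backend { name: "ollama" },
    ];
    let prompt = "Summarise this repo in one line.";

    // Fan the same prompt out to every backend and consume replies in
    // completion order, instead of waiting on the slowest backend.
    let mut pending: FuturesUnordered<_> =
        backends.iter().map(|b| b.chat(prompt)).collect();

    while let Some(reply) = pending.next().await {
        println!("{reply}");
    }
}
```

The alternative you mention (a discrete async context per chat, e.g. one `tokio::spawn` per backend) also works and isolates failures better; `FuturesUnordered` just keeps everything on one task and avoids the `'static` bounds that spawning requires.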

Also, do you think it would be possible to make this usable from wasm?

Metadata

Labels: enhancement (New feature or request)
