[Feat]: Auto-Evaluation Metrics Based for every LLM Request #470


Closed
1 task done
patcher9 opened this issue Oct 25, 2024 · 0 comments · Fixed by #614
Labels: client (Issue related to OpenLIT Client), 🚀 Feature (New feature or request)

Comments

patcher9 (Contributor) commented Oct 25, 2024

🚀 What's the Problem?

Right now OpenLIT can show LLM events, but there is no automated way to tell whether an event performed well or poorly. This could be done using eval metrics, which typically run against a dataset.
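
For reference, the dataset-style workflow this contrasts with looks roughly like the sketch below. The dataset and the exact-match metric are purely illustrative, not part of OpenLIT:

```python
# Illustrative only: a tiny dataset-based eval metric (exact-match accuracy).
# This shows the "run evals against a dataset" pattern the issue contrasts
# with; neither the dataset nor the metric comes from OpenLIT.
dataset = [
    {"prompt": "2 + 2 =", "expected": "4", "actual": "4"},
    {"prompt": "Capital of France?", "expected": "Paris", "actual": "Lyon"},
]

def exact_match_accuracy(rows: list[dict]) -> float:
    """Fraction of responses that exactly match the expected answer."""
    hits = sum(1 for row in rows if row["actual"].strip() == row["expected"].strip())
    return hits / len(rows)

print(f"accuracy: {exact_match_accuracy(dataset):.2f}")  # accuracy: 0.50
```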

💡 Your Dream Solution

Auto-evaluation scoring for all LLM requests traced and stored in OpenLIT.
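
One way this could work, as a minimal sketch: wrap each LLM call so that an eval score is computed per request and recorded next to its trace. Here `judge_score`, `with_auto_eval`, and the stub model are hypothetical names for illustration, not OpenLIT APIs:

```python
# Hypothetical sketch of per-request auto-evaluation (not an OpenLIT API).
# The idea: every traced LLM call also gets an eval score attached to its
# telemetry, instead of evals only running offline against a dataset.
from typing import Callable

def judge_score(prompt: str, response: str) -> float:
    """Placeholder LLM-as-judge: in practice this would call a grader model."""
    return 1.0 if response.strip() else 0.0  # trivially: non-empty => pass

def with_auto_eval(llm_call: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an LLM call so each request/response pair is scored automatically."""
    def wrapped(prompt: str) -> str:
        response = llm_call(prompt)
        score = judge_score(prompt, response)
        # In a real integration this score would be recorded on the request's
        # trace/span in OpenLIT rather than printed.
        print(f"eval_score={score:.2f} prompt={prompt!r}")
        return response
    return wrapped

# Hypothetical usage with a stub model:
echo_model = with_auto_eval(lambda p: f"echo: {p}")
echo_model("What is OpenLIT?")  # prints eval_score=1.00 ...
```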

🤔 Seen anything similar?

NA

🖼️ Pictures or Drawings

NA

👐 Want to Help Make It Happen?

  • Yes, I'd like to volunteer and help out with this!
patcher9 added the 🚀 Feature (New feature or request), ✋ Up for Grabs (The issue is Up for Grabs), and client (Issue related to OpenLIT Client) labels on Oct 25, 2024
patcher9 removed the ✋ Up for Grabs (The issue is Up for Grabs) label on Mar 10, 2025