Tags: marcklingen/ragas
testset generation: bug fixes (explodinggradients#185)

Fixes
- [x] issues with multi-context question generation
- [x] error in doc filtering
ZeroDivisionError in context_relevance (explodinggradients#154)

Changed `python3.9/site-packages/ragas/metrics/context_relevance.py`, line 162, in `_score_batch`.

From:
```python
score = min(len(indices) / len(context_sents), 1)
```

To:
```python
if len(context_sents) == 0:
    score = 0
else:
    score = min(len(indices) / len(context_sents), 1)
```

fixes: explodinggradients#153

Co-authored-by: devtribble <devanshu@tribble.ai>
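To see the guard in isolation, here is a minimal standalone sketch of the patched scoring step (not the actual ragas code; the LLM-driven sentence extraction that produces `indices` inside `_score_batch` is elided):

```python
def context_relevance_score(indices: list, context_sents: list) -> float:
    # Empty contexts previously raised ZeroDivisionError; score them 0.
    if len(context_sents) == 0:
        return 0
    return min(len(indices) / len(context_sents), 1)

assert context_relevance_score([], []) == 0                       # no crash on empty context
assert context_relevance_score([0, 2], ["a", "b", "c"]) == 2 / 3  # 2 of 3 sentences relevant
```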
Fix remap_column_names (explodinggradients#140)

When I try the following, I get an error:

```python
from datasets import Dataset

ds = Dataset.from_dict(
    {
        "question": ["question"],
        "answer": ["answer"],
        "contexts": [["context"]],
    }
)

from ragas import evaluate
from ragas.metrics import Faithfulness

evaluate(dataset=ds, metrics=[Faithfulness(batch_size=1)])
```

```
KeyError: "Column ground_truths not in the dataset. Current columns in the dataset: ['question', 'answer', 'contexts']"
```

But `ground_truths` is not needed for `Faithfulness`. This PR fixes that.
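A hypothetical sketch of the kind of guard this fix implies (not the actual ragas implementation): validate and remap only the columns that the selected metrics require, so a metric like `Faithfulness` never demands `ground_truths`.

```python
from datasets import Dataset

def remap_column_names(ds: Dataset, column_map: dict, required: list) -> Dataset:
    """Rename only the columns the chosen metrics need; ignore the rest."""
    for expected in required:
        actual = column_map.get(expected, expected)
        if actual not in ds.column_names:
            raise KeyError(
                f"Column {actual} not in the dataset. "
                f"Current columns in the dataset: {ds.column_names}"
            )
        if actual != expected:
            ds = ds.rename_column(actual, expected)
    return ds

# Faithfulness only reads question/answer/contexts, so no error is raised
# even though ground_truths is absent.
ds = Dataset.from_dict({"question": ["q"], "answer": ["a"], "contexts": [["c"]]})
ds = remap_column_names(ds, {}, required=["question", "answer", "contexts"])
```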
Improve context relevancy (explodinggradients#112)

## What
Improve the context relevancy prompt.

## Why
The LLM has trouble with candidate sentence extraction: the current prompt is suboptimal and can cause the context relevancy score to drop to zero. The new prompt is tested on data from Arxiv, StackOverflow, etc.

fixes: explodinggradients#109
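For illustration, a candidate-sentence-extraction prompt of the kind this PR tunes might look like the following (a hypothetical example, not the prompt text shipped in the PR):

```python
# Hypothetical illustration only; the actual prompt in the PR differs.
CONTEXT_RELEVANCY_PROMPT = """\
Extract the sentences from the provided context that are required to
answer the question. If no sentence is relevant, reply "Insufficient Information".

question: {question}
context: {context}
sentences:"""

print(CONTEXT_RELEVANCY_PROMPT.format(question="What is RAG?", context="..."))
```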
Context Recall (explodinggradients#96)

## What
Context recall estimation using annotated answers as ground truth.

## Why
Context recall was a highly requested feature, as it targets one of the main pain points where pipeline errors occur in RAG systems.

## How
Introduced a simple paradigm similar to faithfulness.

Co-authored-by: jjmachan <jamesjithin97@gmail.com>
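A minimal sketch of that paradigm, assuming the LLM attribution step is reduced to one boolean per ground-truth statement (a hypothetical helper, not the ragas implementation): each statement in the annotated answer is classified as attributable to the retrieved context or not, and recall is the attributed fraction.

```python
def context_recall(attributed: list) -> float:
    """attributed[i] is True if ground-truth statement i can be
    attributed to the retrieved context (as judged by the LLM)."""
    if not attributed:
        return 0.0
    return sum(attributed) / len(attributed)

# 2 of 3 ground-truth statements are supported by the retrieved context.
assert context_recall([True, False, True]) == 2 / 3
```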
docs: notebook for langsmith integration (explodinggradients#85)