Tags: davizucon/ragas
fix: faithfulness (explodinggradients#787) fixes: explodinggradients#785
docs: minor corrections (explodinggradients#747) fixes: explodinggradients#665 explodinggradients#746
Add support for optional max concurrency (explodinggradients#643)

**Added optional Semaphore-based concurrency control for explodinggradients#642**

As for the default value for `max_concurrency`, I don't know the ratio of API users vs. local LLM users, so the proposed default is an opinionated value of `16`:

* I *think* more people currently use the OpenAI API than local LLMs, so the default is not `-1` (no limit).
* `16` seems reasonably fast and has not hit throughput limits in my experience.

**Tests**

Embedding for 1k documents finished in under 2 minutes, and the subsequent testset generation for `test_size=1000` proceeded without getting stuck:

![image](https://github.com/explodinggradients/ragas/assets/6729737/d83fecc8-a815-43ee-a3b0-3395d7a9d244)

Another 30s later:

![image](https://github.com/explodinggradients/ragas/assets/6729737/d4ab08ba-5a79-45f6-84b1-e563f107d682)

---------

Co-authored-by: Jithin James <jamesjithin97@gmail.com>
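The Semaphore-based control described above can be sketched as follows. This is not the ragas implementation itself, just a minimal illustration of the technique: the helper name `run_with_max_concurrency` and its signature are assumptions, and `-1` is treated as "no limit" as described in the PR.

```python
import asyncio


async def run_with_max_concurrency(coros, max_concurrency=16):
    """Run coroutines concurrently, at most `max_concurrency` at a time.

    Hypothetical sketch: a value of -1 (or None) means no limit,
    matching the PR's description of the default behavior.
    """
    if max_concurrency is None or max_concurrency < 0:
        # No limit: schedule everything at once.
        return await asyncio.gather(*coros)

    semaphore = asyncio.Semaphore(max_concurrency)

    async def _guarded(coro):
        # Each coroutine must acquire a slot before running,
        # so at most `max_concurrency` run simultaneously.
        async with semaphore:
            return await coro

    return await asyncio.gather(*(_guarded(c) for c in coros))
```

With this wrapper, a batch of embedding or generation calls can be dispatched with `await run_with_max_concurrency(calls, max_concurrency=16)`; `gather` preserves input order in the results.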
feat(llms.json_load): Recursively load json lists (explodinggradients#593)

Slightly broken JSON is guarded against by `ragas.llms.json_load.JsonLoader._find_outermost_json`. However, I've found that for many metrics, gpt4 often returns slightly broken JSON *lists*, for which this function returns only the first valid JSON object. Here we wrap `_find_outermost_json` with `_load_all_jsons`, which calls it recursively to load the full JSON list. E.g., the expected output for `'{"1":"2"}, ,, {"3":"4"}]'` is `[{'1': '2'}, {'3': '4'}]`.

---------

Co-authored-by: jjmachan <jamesjithin97@gmail.com>
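The recursive extraction can be sketched as below. This is an illustration of the idea rather than ragas's actual `_load_all_jsons`: the function name is borrowed from the PR, but the brace-matching logic here is a simplified assumption (it does not account for braces inside string values).

```python
import json


def load_all_jsons(text):
    """Extract every parseable JSON object from a slightly broken string.

    Sketch of the recursive approach: find the first balanced {...}
    span, try to parse it, then recurse on the remainder of the text.
    """
    start = text.find("{")
    if start == -1:
        return []
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # Found a balanced span; parse it if possible,
                # then keep scanning the rest of the string.
                try:
                    obj = json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return load_all_jsons(text[i + 1:])
                return [obj] + load_all_jsons(text[i + 1:])
    return []
```

On the PR's example, `load_all_jsons('{"1":"2"}, ,, {"3":"4"}]')` yields `[{'1': '2'}, {'3': '4'}]`: the stray commas and trailing bracket between the objects are skipped rather than aborting the parse.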
fix: answer_correctness embedding (explodinggradients#513)
fix: handle edge cases in prompt processing (explodinggradients#374)