[x] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
The getting-started code for RAG evaluation raises a runtime exception, `KeyError: 0`. The cause is that the faithfulness score is NaN for some evaluation cases; if I remove the faithfulness metric, the evaluation runs smoothly.
```
Prompt fix_output_format failed to parse output: The output parser failed to parse the output including retries.
Prompt claim_decomposition_prompt failed to parse output: The output parser failed to parse the output including retries.
Exception raised in Job[17]: RagasOutputParserException(The output parser failed to parse the output including retries.)
/home/trisha/Desktop/.venv/lib/python3.10/site-packages/ragas/metrics/_answer_similarity.py:88: RuntimeWarning: invalid value encountered in divide
  embedding_2_normalized = embedding_2 / norms_2
..........
    "output": prompt_trace.outputs.get("output", {})[0],
KeyError: 0
```
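For what it's worth, the `KeyError: 0` in the last frame appears to come from indexing into the empty-dict default that `.get("output", {})` returns when the parser produced no `"output"` key. A minimal sketch of that failure mode (`prompt_trace_outputs` here is a hypothetical stand-in for the trace's `outputs` attribute):

```python
# Hypothetical stand-in for a trace whose output parser failed:
# the "output" key is missing, so .get() falls back to an empty dict.
prompt_trace_outputs = {}

try:
    # Indexing an empty dict with 0 raises KeyError: 0,
    # matching the traceback above.
    value = prompt_trace_outputs.get("output", {})[0]
except KeyError as exc:
    print(f"KeyError: {exc}")
```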
The exception is raised while running `result = evaluate(dataset=dataset, metrics=metrics)`.
Code to reproduce the error:
```python
from ragas.metrics import LLMContextRecall, Faithfulness, FactualCorrectness, SemanticSimilarity
from ragas import evaluate
import pandas as pd
from datasets import Dataset
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
```
Ragas version: 0.28.0
Python version: 3.9
Code to Reproduce
The code is from the getting-started guide: https://docs.ragas.io/en/stable/getstarted/rag_evaluation/
Error trace
While running the line `results = evaluate(dataset=eval_dataset, metrics=metrics)`, I get the exception:
```
outputs={'faithfulness': nan},
KeyError: 0
```
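As a stopgap while the faithfulness parser is failing, NaN scores could be dropped before aggregating. This is a stdlib-only sketch, not part of the Ragas API; the `scores` dict below is made-up example data shaped like per-sample metric results:

```python
import math

# Made-up per-sample scores; faithfulness is NaN where the output parser failed.
scores = {
    "faithfulness": [0.9, float("nan"), 0.8],
    "context_recall": [1.0, 0.7, 0.9],
}

def mean_ignoring_nan(values):
    """Average only the non-NaN entries; return NaN if nothing is left."""
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid) if valid else float("nan")

summary = {name: mean_ignoring_nan(vals) for name, vals in scores.items()}
print(summary)
```

This only papers over the aggregation step; the underlying parser failures from the Ollama model would still need to be addressed.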
Expected behavior
The evaluation completes and outputs the results for all metrics.