
How to generate out of context questions to evaluate the hallucination of a LLM #1785

Open
parkerzf opened this issue Dec 23, 2024 · 0 comments
Labels: module-testsetgen (Module testset generation), question (Further information is requested)

- [x] I checked the documentation and related resources and couldn't find an answer to my question.

Your Question
To evaluate hallucination in an LLM, we would like to generate several out-of-context questions using Ragas. I checked the docs and wonder whether I could use the persona concept as follows.

Code Examples

from ragas.testset.persona import Persona

# Persona intended to steer generation toward questions unrelated to the corpus
persona_external = Persona(
    name="External",
    role_description="Doesn't know much about the company and asks a lot of unrelated questions.",
)
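Outside of Ragas itself, the idea behind the snippet can be sketched in plain Python. This is only an illustration of how a persona's `role_description` could be folded into a question-generation prompt; `SimplePersona` and `build_generation_prompt` are hypothetical stand-ins, not part of the Ragas API, which builds its prompts internally:

```python
from dataclasses import dataclass


@dataclass
class SimplePersona:
    """Stand-in for ragas.testset.persona.Persona (hypothetical, for illustration)."""
    name: str
    role_description: str


def build_generation_prompt(persona: SimplePersona, topic: str) -> str:
    """Fold the persona's role description into a question-generation prompt.

    Hypothetical helper: it only shows how a role_description can steer an
    LLM toward out-of-context questions for hallucination testing.
    """
    return (
        f"You are {persona.name}: {persona.role_description}\n"
        f"Ask one question about '{topic}' that is unlikely to be answerable "
        f"from the company's internal documents."
    )


external = SimplePersona(
    name="External",
    role_description="Doesn't know much about the company and asks a lot of unrelated questions.",
)
prompt = build_generation_prompt(external, "quarterly revenue")
print(prompt)
```

The prompt produced this way would then be sent to the LLM under test; answers that confidently cite internal material for such questions are candidate hallucinations.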