UploadException Error #1849
Hi, I'm trying to run the code in the Get Started docs and, apart from my very first run, I have been unable to upload the results to the Ragas app. Has anyone experienced this before and found a fix? Or is there a way I can test my connection to the API? Thanks in advance!

ERROR:

```
UploadException: Failed to upload results: {"status":"error","status_code":500,"message":"An internal server error occured"}
```
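Since the question asks for a way to test the connection, a minimal pre-flight check before the upload step might look like the sketch below. This assumes the `RAGAS_APP_TOKEN` environment variable described in the Ragas docs and leaves the evaluation itself as in Get Started:

```python
import os

# The Ragas app upload authenticates via an API token; if it is missing,
# the upload fails regardless of whether the evaluation itself succeeded.
if not os.environ.get("RAGAS_APP_TOKEN"):
    raise RuntimeError("RAGAS_APP_TOKEN is not set")

# results = evaluate(...)   # as in the Get Started docs
# results.upload()          # raises UploadException on a non-success response
```

A 500 status, as in the error above, points at the server side rather than the client, but this at least rules out a missing or misconfigured token.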
hey @JoshShailes - we are keeping an eye on this but haven't been able to reproduce it reliably, which is the problem. Could you tell us which version of Ragas you are using, and let me know if this still persists? Adding @ganeshrvel too to keep an eye on this.
Hi @JoshShailes, we have made a new release that touched the upload feature. Could you upgrade Ragas to 0.2.11 and try the steps mentioned above by @jjmachan?
Hi @ganeshrvel @jjmachan, I have version 0.2.11. I seem to get this issue when using gpt-3.5-turbo-16k as my evaluator LLM: with this model I get an AttributeError involving StringIO during the evaluate() step, and then get the upload error. Maybe you can reproduce the error using this LLM as the evaluator. I am also using Azure OpenAI. When I used gpt-4o as the evaluator LLM, I do not get the AttributeError and the upload is successful.
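For reference, wiring an Azure OpenAI evaluator into Ragas generally looks like the sketch below, using the same LangChain wrappers that appear later in this thread. The endpoint, deployment names, and API version are placeholders, not values from this issue:

```python
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper

# Placeholder Azure settings -- substitute your own resource values.
evaluator_llm = LangchainLLMWrapper(
    AzureChatOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        azure_deployment="gpt-35-turbo-16k",  # hypothetical deployment name
        api_version="2024-02-01",
    )
)
evaluator_embeddings = LangchainEmbeddingsWrapper(
    AzureOpenAIEmbeddings(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        azure_deployment="text-embedding-ada-002",  # hypothetical deployment name
    )
)
```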
@JoshShailes could you run this again? This is a bug on our end and we'll get it fixed.
@jjmachan The evaluate function:
Hey @JoshShailes thanks a lot for that - I'm not able to reproduce this with "gpt-3.5-turbo":

```python
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAIEmbeddings
from ragas import SingleTurnSample
from ragas.metrics import AspectCritic

evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-3.5-turbo"))
evaluator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())

test_data = {
    "user_input": "summarise given text\nThe company reported an 8% rise in Q3 2024, driven by strong performance in the Asian market. Sales in this region have significantly contributed to the overall growth. Analysts attribute this success to strategic marketing and product localization. The positive trend in the Asian market is expected to continue into the next quarter.",
    "response": "The company experienced an 8% increase in Q3 2024, largely due to effective marketing strategies and product adaptation, with expectations of continued growth in the coming quarter.",
}

metric = AspectCritic(name="summary_accuracy", llm=evaluator_llm, definition="Verify if the summary is accurate.")
test_data = SingleTurnSample(**test_data)
await metric.single_turn_ascore(test_data)
```

This is what I was running. If you have a code snippet that you could share, that would be really helpful.
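(A note on that snippet: the top-level `await` only works in a notebook or other async context; in a plain Python script you would wrap the call, e.g. `asyncio.run(metric.single_turn_ascore(test_data))`.)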
Hi @jjmachan, I was running the code for a dataset; you will need to add your LLM to this snippet:
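The original snippet was not captured here, but the dataset-level flow that ends in the failing upload roughly follows the Get Started docs; a sketch, with placeholder rows and a gpt-3.5-turbo evaluator standing in for the poster's Azure setup:

```python
from langchain_openai import ChatOpenAI
from ragas import EvaluationDataset, evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import AspectCritic

evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-3.5-turbo"))

# Placeholder rows -- the real dataset follows the Get Started tutorial.
eval_dataset = EvaluationDataset.from_list([
    {
        "user_input": "summarise given text\n<document text>",
        "response": "<model summary>",
    },
])

metric = AspectCritic(
    name="summary_accuracy",
    llm=evaluator_llm,
    definition="Verify if the summary is accurate.",
)

results = evaluate(eval_dataset, metrics=[metric])
results.upload()  # the step that raises UploadException
```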
thanks a lot @JoshShailes - the root cause of this issue is #1831, so we'll be fixing that shortly and will cut a new release.
@JoshShailes fixed with fix: output parser bug by jjmachan · Pull Request #1864 · explodinggradients/ragas - do try it out and feel free to close this issue if it is fixed for you 🙂
Hi @jjmachan, many thanks. I have updated to version 0.2.12 but now get another error. I'm guessing it is because the response from gpt-3.5 is not in the JSON format expected by the parser.
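That guess matches how strict JSON parsing behaves; a minimal standalone sketch (not Ragas's actual parser) of the failure mode when a completion wraps JSON in extra prose:

```python
import json

# gpt-3.5-class models often wrap the JSON payload in extra prose like this.
raw_output = 'Sure! Here is the result: {"score": 1}. Hope that helps!'

try:
    json.loads(raw_output)  # a strict JSON parser rejects the surrounding text
except json.JSONDecodeError as exc:
    print(f"parse failed: {exc}")
```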