We have implemented an application that answers from our internal documentation when it can find an answer there, and otherwise falls back to the pre-trained GPT knowledge.
I am seeing some odd behavior: if the user's first question is about the internal documentation, the app works well and gives good answers. But if the first question is very generic, such as "Who is XYZ celebrity" or even a greeting like "hello" (so the application has to answer from the pre-trained data), it is then unable to answer any further questions based on the internal documentation.
I have tried adjusting configuration parameters such as temperature and top_p, and even the system prompt, but the behavior persists.
Any idea why this is happening?
I have seen similar behavior. Looking at the logs in debug mode, I can see that the Azure AI Search citations are being added to the OpenAI request (in the role: tool messages). Yet for some reason the model does not cite them, or says the information is not there, even though I can see the citations in the request.
I have also seen this happen beyond the first question; sometimes it affects later questions in the conversation as well.
Hi @rochaktarika123, you might try adjusting the 'inScope' parameter or the strictness. Turning off 'inScope' or reducing the strictness value will make the model more likely to answer from outside the internal documentation data.
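For reference, here is a rough sketch of where those settings live, assuming the Azure OpenAI "On Your Data" feature with an Azure AI Search data source (the endpoint, index name, and key below are placeholders, not values from this thread):

```python
# Sketch: building the data_sources payload for an Azure OpenAI
# "On Your Data" chat completion request, with in_scope turned off
# and a low strictness so the model may fall back to pre-trained
# knowledge for out-of-scope questions (greetings, general trivia).
# All endpoint/index/key values are hypothetical placeholders.

def build_data_source(search_endpoint: str, index_name: str, api_key: str) -> dict:
    """Return one Azure AI Search data_sources entry with relaxed scoping."""
    return {
        "type": "azure_search",
        "parameters": {
            "endpoint": search_endpoint,
            "index_name": index_name,
            "authentication": {"type": "api_key", "key": api_key},
            # in_scope=False lets the model answer questions the index
            # does not cover, instead of refusing.
            "in_scope": False,
            # strictness ranges 1-5; lower values filter retrieved
            # documents less aggressively.
            "strictness": 2,
        },
    }

extra_body = {
    "data_sources": [
        build_data_source(
            "https://example.search.windows.net",  # placeholder endpoint
            "internal-docs",                        # placeholder index
            "<search-api-key>",                     # placeholder key
        )
    ]
}
# This dict would then be passed via extra_body to
# client.chat.completions.create(...) with the openai Python SDK.
```

This only changes how willing the model is to go outside the index; whether it fixes the stuck-after-a-generic-question behavior you describe would need testing against your deployment.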