It is a great idea to add context so the LLM can tailor responses to the user. This boils down to crafting the right prompt for getting responses. The flip side is that more context means more tokens for the LLM to process, which has a direct impact on cost: the cost grows roughly in proportion to the length of the prompt constructed to satisfy the criteria above. It would be great to get feedback from the community on the cost vs. customization trade-off. It would also be great if you could do some research on prompt engineering and make an attempt at crafting a prompt that satisfies the criteria above.
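To make the cost trade-off concrete, here is a minimal sketch of estimating the extra token cost of prepending background context. The 4-characters-per-token heuristic and the per-1k-token price are illustrative assumptions, not real tokenizer behavior or real pricing:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text (assumption).
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, price_per_1k_tokens: float = 0.01) -> float:
    # price_per_1k_tokens is a placeholder value, not a real model price.
    return estimate_tokens(prompt) / 1000 * price_per_1k_tokens

base_prompt = "Explain dependency injection."
background = (
    "User background: 8 years of backend experience, "
    "mostly Java and Spring; recent projects involve Kubernetes."
)

# The personalized prompt costs more simply because it is longer.
cost_plain = estimate_cost(base_prompt)
cost_personalized = estimate_cost(background + "\n" + base_prompt)
```

Even this crude estimate shows the cost scaling linearly with the amount of background included, which is why trimming the context to only the relevant fields matters.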
-
User Story: Personalized Responses Based on User Background
As a user, I want to provide relevant background information to the LLM (Large Language Model) so that it can tailor its responses to my specific context. This will enhance the quality and relevance of the LLM's interactions with me.
Acceptance Criteria:
User Input: The LLM should allow users to input details about their background, including job experience, skills, and project history.
Contextual Adaptation: When responding to user queries, the LLM should consider the provided background information and adjust its answers accordingly.
Personalization: The LLM’s responses should reflect an understanding of the user’s unique context, preferences, and expertise.
Privacy and Security: The LLM must handle user-provided information securely and ensure that no sensitive data is exposed.
Feedback Loop: Users should have the ability to update or modify their background information over time, allowing the LLM to adapt continuously.
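The criteria above can be sketched as a simple prompt builder that prepends the user's background to each query. The field names (`job_experience`, `skills`, `project_history`) mirror the User Input criterion but are illustrative assumptions, not an agreed schema; updating the dict satisfies the Feedback Loop criterion:

```python
def build_personalized_prompt(background: dict, question: str) -> str:
    """Compose a prompt that prepends the user's background to a question.

    Only the fields present in `background` are included, so users can
    add or remove details over time (the feedback-loop requirement).
    """
    lines = ["You are an assistant. Tailor your answer to this user:"]
    for field in ("job_experience", "skills", "project_history"):
        value = background.get(field)
        if value:
            # Render snake_case field names as readable labels.
            lines.append(f"- {field.replace('_', ' ')}: {value}")
    lines.append("")
    lines.append(f"User question: {question}")
    return "\n".join(lines)
```

Note that this sketch only covers contextual adaptation; the privacy criterion would additionally require redacting or excluding sensitive fields before the prompt ever leaves the client.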