Prompt Engineering
- Prompt engineering - purpose
- Chat Completions API
- Definition: https://platform.openai.com/docs/guides/text-generation/chat-completions-api
- API Reference: https://platform.openai.com/docs/api-reference/chat
- API
- Purpose: the API receives either a single chat message or a conversation history and responds with a relevant completion based on that input.
- Request:
  - model: gpt-4, gpt-3.5-turbo
  - messages:
    - type: array
    - properties:
      - role: system (helps set the behavior of the assistant), user, assistant
      - content
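As a concrete illustration, here is a minimal sketch of such a request using the OpenAI Python SDK (assuming the `openai` v1.x package is installed and `OPENAI_API_KEY` is set in the environment; the model choice and prompt text are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# messages: an array of {role, content} objects; "system" sets behavior,
# "user" and "assistant" carry the conversation so far.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "What is prompt engineering?"},
    ],
)
print(response.choices[0].message.content)
```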
- Write clear instructions
These models can’t read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you’d like to see. The less the model has to guess at what you want, the more likely you’ll get it.
- Provide reference text
Language models can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text to these models can help in answering with fewer fabrications.
- Split complex tasks into simpler subtasks
Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to a language model. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.
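A minimal sketch of such a workflow, where the output of one call feeds the next; the summarize-then-translate split, the `ask` helper, and the placeholder article are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Single-turn helper around the Chat Completions API (illustrative)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

article = "..."  # placeholder for a long source text

# Subtask 1: summarize the article.
summary = ask(f'Summarize the text delimited by triple quotes in 3 sentences.\n"""{article}"""')

# Subtask 2: use the output of subtask 1 as the input of the next step.
translation = ask(f'Translate the following summary into Vietnamese.\n"""{summary}"""')
print(translation)
```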
- Give the model time to "think"
If asked to multiply 17 by 28, you might not know it instantly, but can still work it out with time. Similarly, models make more reasoning errors when trying to answer right away, rather than taking time to work out an answer. Asking for a "chain of thought" before an answer can help the model reason its way toward correct answers more reliably.
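One possible way to realize this with the Chat Completions API is a system message that asks for step-by-step reasoning before the final answer; the exact wording below is an assumption, not a prescribed formula:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            # Ask for a chain of thought before the final answer.
            "content": (
                "First work out your own solution step by step. "
                "Only after you have finished reasoning, state the final answer "
                "on a line starting with 'Answer:'."
            ),
        },
        {"role": "user", "content": "What is 17 multiplied by 28?"},
    ],
)
print(response.choices[0].message.content)
```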
- Use external tools
Compensate for the weaknesses of the model by feeding it the outputs of other tools. For example, a text retrieval system (sometimes called RAG or retrieval augmented generation) can tell the model about relevant documents. A code execution engine like OpenAI's Code Interpreter can help the model do math and run code. If a task can be done more reliably or efficiently by a tool rather than by a language model, offload it to get the best of both.
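As a sketch of handing work off to a tool, the request below advertises a hypothetical `calculate` function through the `tools` parameter and lets the model decide whether to call it; the function schema is invented for illustration:

```python
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",  # hypothetical tool implemented by the caller
            "description": "Evaluate an arithmetic expression and return the result.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is 1234 * 5678?"}],
    tools=tools,
    tool_choice="auto",  # the model may answer directly or request the tool
)

# If the model chose the tool, the call arguments are returned for the caller to execute.
print(response.choices[0].message.tool_calls)
```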
- Test changes systematically
Improving performance is easier if you can measure it. In some cases a modification to a prompt will achieve better performance on a few isolated examples but lead to worse overall performance on a more representative set of examples. Therefore, to be sure that a change is net positive to performance, it may be necessary to define a comprehensive test suite (also known as an "eval").
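A minimal sketch of such an eval: run a candidate prompt over a small set of question/expected-answer pairs and report the pass rate. The toy dataset and the substring-match grader are assumptions; real evals are larger and use more robust grading:

```python
from openai import OpenAI

client = OpenAI()

# Toy eval set: representative questions with expected answers (illustrative only).
EVAL_SET = [
    {"question": "Who was the president of Mexico in 2021?", "expected": "López Obrador"},
    {"question": "What is 17 * 28?", "expected": "476"},
]

def run_eval(system_prompt: str) -> float:
    """Return the fraction of examples whose answer contains the expected string."""
    passed = 0
    for example in EVAL_SET:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": example["question"]},
            ],
        )
        answer = response.choices[0].message.content
        passed += example["expected"].lower() in answer.lower()
    return passed / len(EVAL_SET)

# Compare two prompt variants on the same test suite.
print(run_eval("Answer concisely."))
print(run_eval("Answer concisely and state only the final fact."))
```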
Each of the strategies listed above can be instantiated with specific tactics. These tactics are meant to provide ideas for things to try. They are by no means fully comprehensive, and you should feel free to try creative ideas not represented here.
Worse | Better |
---|---|
Who’s president? | Who was the president of Mexico in 2021? |
Role | Content |
---|---|
SYSTEM | When I ask something, you can answer me angrily. |
USER | Who was the President of Vietnam in 2019? |
- Delimiters like triple quotation marks, XML tags, section titles, etc. can help demarcate sections of text to be treated differently.
Role | Content |
---|---|
USER | Please translate the text delimited by triple quotes into Vietnamese """ hello """ |
For straightforward tasks such as these, using delimiters might not make a difference in the output quality. However, the more complex a task is, the more important it is to disambiguate task details. Don’t make the model work to understand exactly what you are asking of it.
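The same delimiter tactic expressed as an API call; the placeholder text and wording are illustrative:

```python
from openai import OpenAI

client = OpenAI()

user_text = "hello"  # placeholder for arbitrary user-supplied text

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            # Triple quotes demarcate the text to translate from the instruction itself.
            "content": f'Please translate the text delimited by triple quotes into Vietnamese.\n"""{user_text}"""',
        }
    ],
)
print(response.choices[0].message.content)
```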
- Zero-Shot Prompting
Role | Content |
---|---|
USER | Generate 10 possible names for my new dog. |
- One-Shot Prompting
Role | Content |
---|---|
USER | Generate 10 possible names for my new dog. A dog name that I like is Banana. |
- Few-Shot Prompting
Role | Content |
---|---|
USER | Generate 10 possible names for my new dog. Dog names that I like include: – Banana – Kiwi – Pineapple – Coconut |
Providing general instructions that apply to all examples is generally more efficient than demonstrating all permutations of a task by example, but in some cases providing examples may be easier. For example: Generate 10 possible names for my new dog. I want its name to be a type of fruit.
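Few-shot examples can also be supplied as earlier user/assistant turns rather than packed into a single message; a sketch reusing the dog-name examples from the table above (the system message wording is an assumption):

```python
from openai import OpenAI

client = OpenAI()

# Few-shot prompting: prior user/assistant turns act as worked examples.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Suggest dog names in the style the user demonstrates."},
        {"role": "user", "content": "Suggest a dog name."},
        {"role": "assistant", "content": "Banana"},
        {"role": "user", "content": "Suggest another dog name."},
        {"role": "assistant", "content": "Kiwi"},
        {"role": "user", "content": "Generate 10 possible names for my new dog."},
    ],
)
print(response.choices[0].message.content)
```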
- Example combining three tactics: specify the steps required to complete a task, provide examples, and specify the desired length of the output
Role | Content |
---|---|
SYSTEM | Use the following steps to respond to the user's input. Step 1: you will be given a question; first, answer the user's question. Step 2: translate the answer from Step 1 into English; the response must include both Vietnamese and English. The format of the answer in Step 2 should be: Vietnamese (English), for example: Xin chào (Hello). |
USER | What is ChatGPT? Summarize the answer in about 5 words. |
- Example:
Role | Content |
---|---|
SYSTEM | You will be provided with a document delimited by triple quotes and a question. Your task is to answer the question using only the provided document and to cite the passage(s) of the document used to answer the question. If the document does not contain the information needed to answer this question then simply write: "Insufficient information." If an answer to the question is provided, it must be annotated with a citation. Use the following format to cite relevant passages ({"citation": …}). |
USER | <insert articles, each delimited by triple quotes> Question: <insert question here> |
Embeddings can be used to implement efficient knowledge retrieval. See the tactic "Use embeddings-based search to implement efficient knowledge retrieval" for more details on how to implement this.
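A minimal sketch of embeddings-based retrieval, assuming the `text-embedding-3-small` model and a tiny in-memory document list; production systems typically chunk documents and use a vector database instead of the brute-force search shown here:

```python
import math
from openai import OpenAI

client = OpenAI()

DOCUMENTS = [
    "The Chat Completions API accepts a list of messages and returns a reply.",
    "Embeddings map text to vectors so that similar texts are close together.",
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

question = "What are embeddings used for?"
q_vec = embed(question)

# Retrieve the most relevant document and hand it to the model as reference text.
best_doc = max(DOCUMENTS, key=lambda d: cosine(q_vec, embed(d)))
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f'Answer using only the document delimited by triple quotes.\n"""{best_doc}"""\nQuestion: {question}',
    }],
)
print(response.choices[0].message.content)
```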
For dialogue applications that require very long conversations, summarize or filter previous dialogue.
- Example:
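A minimal sketch of one possible approach: once the history grows past a threshold, summarize the older turns and replace them with the summary. The threshold, helper name, and wording are assumptions:

```python
from openai import OpenAI

client = OpenAI()

MAX_TURNS = 20  # assumed threshold; tune for your context window

def compact_history(messages: list[dict]) -> list[dict]:
    """Summarize older turns so the conversation fits comfortably in context."""
    if len(messages) <= MAX_TURNS:
        return messages
    old, recent = messages[:-10], messages[-10:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old)
    summary = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize this dialogue in a short paragraph:\n{transcript}"}],
    ).choices[0].message.content
    # Replace the old turns with a single system-message summary.
    return [{"role": "system", "content": f"Summary of earlier conversation: {summary}"}] + recent
```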
- function_call - Deprecated in favor of tool_choice.
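For comparison, a sketch of the newer style using `tools` and `tool_choice` to force a call to a hypothetical `get_weather` function; the schema is invented for illustration:

```python
from openai import OpenAI

client = OpenAI()

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function implemented by the caller
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# New style: `tools` + `tool_choice` (here forcing this specific tool) instead of the
# deprecated `functions` + `function_call` parameters.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Hanoi?"}],
    tools=[weather_tool],
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(response.choices[0].message.tool_calls[0].function.arguments)
```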