[SystemSafety] Engineering with the mother of all prompts
Les Chambers
les at chambers.com.au
Sat Jul 8 04:53:09 CEST 2023
Hi All
In working with ChatGPT, it turns out that not all prompts are equal. Some
return more accurate/useful information than others. To get help with the
emerging discipline of prompt engineering, the best mentor is ChatGPT itself.
Enter the Mother Prompt.
Courtesy of the Exponential View by Azeem Azhar, I have found the following
extremely useful:
You may ask ChatGPT to help you design a prompt, according to your specific
needs. Ethan Mollick has usefully designed this mother prompt:
GPT-4 Prompt: Help me craft a really good prompt for ChatGPT.
First, ask me what I want to do. Pause and wait for my answer. Ask questions to
clarify as needed. Second, once you have the information, suggest a prompt that
includes context, examples, and chain-of-thought prompting, where the prompt goes
step by step through the problem. Third, show what your response as ChatGPT
would be to the prompt. Fourth, ask if the user has any suggestions and help
them revise the prompt.
Prompt efficiency can also be improved by using tried-and-true prompt
templates. For example:
Is there a relationship between AAAA and BBBBBB? If so, give me 7 examples.
Apply the concept of Y to X. Give me 7 options. Let's take this step by step to
make sure we have the correct answer.
Let's analyse X by Y. Give me multiple scenarios, each with a quantitative
assessment of Z and the assumptions underlying the prediction.
{Example:
Let's analyse the future global adoption of electric vehicles by 2050. Give me
multiple scenarios, each with a precise percentage of adoption and the
assumptions underlying the prediction.
}
What would need to be true for [unlikely event] to happen? Give me a list of
tangible elements, and how to measure them.
{Example:
What would need to be true for AI not to enable economic growth? Give me a list
of tangible elements, and how to measure them.
}
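Templates like these can also be filled programmatically before pasting them
into a chat. A minimal sketch in Python (the helper name and slot names are my
own, purely illustrative):

```python
def fill_template(template, **slots):
    """Fill a prompt template's named slots (hypothetical helper)."""
    return template.format(**slots)

# The "analyse X by Y" template from above, with named slots.
TEMPLATE = (
    "Let's analyse {X} by {Y}. Give me multiple scenarios, each with a "
    "quantitative assessment of {Z} and the assumptions underlying the "
    "prediction."
)

prompt = fill_template(
    TEMPLATE,
    X="the future global adoption of electric vehicles by 2050",
    Y="region",
    Z="adoption percentage",
)
print(prompt)
```

Keeping the templates as data rather than retyping them each time makes it
easy to reuse the same proven structure across many questions.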
Background
While large language models like GPT-4 generate text based on probabilities
learned from training data, the output isn't deterministic for a given prompt.
Instead, it includes a level of randomness, often controlled by a parameter
known as temperature.
When you provide the same prompt to a language model multiple times, it's like
rolling a weighted die each time. The weights (probabilities) haven't changed,
but because there's an element of randomness, you can still get different
results.
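The weighted-die behaviour can be sketched in a few lines of Python. This is a
toy model of temperature-scaled sampling over token scores (the numbers are
invented), not the internals of any particular LLM:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw scores after temperature scaling.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Roll the weighted die: walk the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Same weights, repeated rolls: the samples vary between runs.
logits = [2.0, 1.0, 0.2]
rng = random.Random(0)
samples = [sample_next_token(logits, temperature=1.0, rng=rng) for _ in range(10)]
print(samples)
```

At a very low temperature the same call collapses to always picking the
highest-scored option, which is the near-deterministic end of the dial.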
This means that how you prompt matters, as the model may choose different
options depending on the wording and its associations. It also means that the
output will always include some element of randomness. The more outputs you
get, the more ground they will cover. This is a principle I rely on: ask for
multiple options, so that I can see a range of possibilities and decide which
is most useful.
Another reason to ask for multiple answers is the evidence around chain of
thought (CoT) prompting. CoT is a technique that has been shown to drastically
improve output by encouraging a chatbot to explain its reasoning. Further
testing has found that saying "Let's work this out in a step by step way to be
sure we have the right answer" yields the best zero-shot prompt results. I
incorporate this into my prompts, or ask for multiple results, depending on my
needs.
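One way to bake that zero-shot CoT phrase (and a request for multiple options)
into every prompt is a small wrapper. A minimal sketch, assuming nothing beyond
string handling; the helper name is my own:

```python
COT_SUFFIX = ("Let's work this out in a step by step way "
              "to be sure we have the right answer.")

def with_cot(prompt, options=None):
    """Append the zero-shot chain-of-thought instruction and, optionally,
    a request for a given number of alternatives (hypothetical helper)."""
    parts = [prompt.strip()]
    if options:
        parts.append(f"Give me {options} options.")
    parts.append(COT_SUFFIX)
    return " ".join(parts)

print(with_cot("Apply the concept of Y to X.", options=7))
```

The wrapped string is then what you paste into (or send to) the chatbot, so
every query carries the step-by-step instruction without retyping it.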
Happy prompting.
Cheers
Les
--
Les Chambers
les at chambers.com.au
+61 (0)412 648 992