Why our prompts matter when engaging with ChatGPT


As the likes of ChatGPT have grown in popularity, so too have discussions about its impact on the workplace. As with any new technology, it’s likely that new jobs will emerge that capitalize on these new capabilities, with "prompt engineer" being one that has gained a degree of popularity among commentators.

Whether a dedicated job role interacting with ChatGPT and its ilk will emerge remains to be seen. However, it does seem likely that all of us will need to gain a degree of familiarity with talking to generative AI bots in a way that elicits a useful response.

The right prompts

Research from USC’s Viterbi School of Engineering explores how we can construct prompts that get the right kind of answer, with the paper highlighting the importance of interacting appropriately if we want feedback that is robust and reliable.

"We demonstrate that even minor prompt variations can change a considerable proportion of predictions," the researchers explain.

The researchers examined four different ways in which prompts can vary: requesting a particular output format, making minor perturbations (such as extra spaces or a greeting), applying jailbreaks, and offering tips.

Each of these variations was tested across 11 tasks commonly used in natural language processing (NLP) research, such as categorizing text or assigning it particular labels, as well as sarcasm detection and even math proficiency. The researchers measured not only the accuracy of each style of prompt but also how often the model's predictions changed.
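The core measurement here, how often a model's answer flips when the prompt is varied, can be sketched in a few lines of Python. Everything below (the function name, the toy label lists) is illustrative and not taken from the paper's code:

```python
def prediction_change_rate(base_preds, variant_preds):
    """Fraction of examples whose predicted label flips under a prompt variant."""
    assert len(base_preds) == len(variant_preds)
    flips = sum(1 for b, v in zip(base_preds, variant_preds) if b != v)
    return flips / len(base_preds)

# Toy example: labels from a baseline prompt vs. a perturbed prompt.
base = ["pos", "neg", "pos", "neg", "pos"]
perturbed = ["pos", "pos", "pos", "neg", "neg"]
print(prediction_change_rate(base, perturbed))  # 0.4
```

In practice, each list would hold the model's predictions over a full benchmark dataset, with one such comparison per prompt variant.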

Minor changes

The results show that seemingly minor changes to the prompts we use can have a significant impact on the responses we receive. Every detail counts in shaping how well the model performs, whether it's adding or removing spaces or punctuation, or choosing a particular data format.

Also, certain prompt additions, such as offers of a reward or specific greetings, shifted the model's predictions, showing how even incidental parts of a prompt's design can affect how the model behaves.

For instance, specifying a particular format for the output changed a large share of the predictions. Indeed, even minor deviations in the prompt had a significant impact: adding a greeting at the start of the prompt or a thank you at the end was enough to influence the output.
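To make these perturbations concrete, here is a minimal sketch of how such prompt variants might be generated before being sent to a model. The helper name and the specific variants are assumptions chosen for illustration, not the paper's actual setup:

```python
def prompt_variants(instruction: str, text: str) -> dict:
    """Build a baseline classification prompt plus a few minor perturbations of it."""
    base = f"{instruction}\n{text}\nAnswer:"
    return {
        "base": base,                            # unmodified prompt
        "greeting": "Hello! " + base,            # greeting added at the start
        "thanks": base + " Thank you!",          # thank-you added at the end
        "extra_space": base.replace(" ", "  "),  # spaces doubled throughout
        "no_colon": base.rstrip(":"),            # trailing punctuation removed
    }

variants = prompt_variants("Classify the sentiment as positive or negative.",
                           "Great movie!")
for name, prompt in variants.items():
    print(f"{name}: {prompt!r}")
```

Each variant would then be submitted to the model and the resulting predictions compared against the baseline.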

Being civil

This last variation is particularly interesting: while the researchers didn't find any single change that suited all tasks, some variations clearly resulted in worse accuracy.

Perhaps understandably, offering to "tip" the chatbot didn't make much difference to the output. The researchers noted that introducing statements like "I won't tip, by the way" or "I'm going to tip $1000 for a perfect response!" did not significantly impact response accuracy. However, when experimenting with jailbreaks, even seemingly harmless ones led to notable decreases in accuracy.

The underlying reason remains unclear, although the researchers have formulated some theories. They hypothesized that instances causing the most change are those that are most perplexing to the language model.

To gauge confusion, they examined a specific subset of tasks where human annotators disagreed, suggesting potential confusion. They did find a correlation indicating that this confusion could explain some of the prediction shifts, but it wasn't strong enough on its own, and they acknowledged the presence of other influencing factors.

Training data matters

The researchers believe that these variations likely stem from the training data the models were built on. In some forums, for instance, it's far more common to use "please", "thank you", and "hello" than in others, so such conversational prompts will affect models trained on that data.

These conversational nuances could significantly influence the learning process of language models. For instance, if greetings frequently precede information on platforms like Quora, a model might prioritize such sources, potentially biasing its responses based on Quora's content related to that specific task. This observation underscores the intricate manner in which the model assimilates and interprets data from diverse online platforms.

A crucial next step for the broader research community involves developing language models that are robust against such variations, consistently providing accurate responses despite formatting changes, perturbations, or jailbreaks.

In the meantime, users of ChatGPT may benefit from keeping their prompts as simple as possible to get the best results back.