
Unlocking the Full Potential of GPTs: OpenAI Shares Practical Strategies and Tactics

Is prompt engineering just another language we all need to learn?
Created on June 5 | Last edited on June 6
When engaging with GPT models, it's essential to appreciate the importance of effective prompting, a factor that holds true regardless of the user's level of AI expertise.
OpenAI's recent guide emphasizes this crucial point, highlighting the need for precise and clear communication when interacting with these advanced AI models. These models, despite their impressive abilities, are not capable of reading our minds or discerning the nuances of our intent without clear guidance. They rely solely on the information we provide them in the form of prompts.
Every piece of text that we feed into the model acts as a compass, guiding it toward our intended response. A well-crafted prompt can make the difference between a valuable, relevant response and one that misses the mark. And as we continue to explore the capabilities of these models, understanding this aspect becomes even more crucial. Here are some tips from the OpenAI team that can help you use GPTs more effectively!

Be Specific

It helps to ask your question with all the relevant details included, rather than stripping them out and leaving the model to guess.
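The original post illustrated this with a screenshot; as a rough sketch (the meeting-notes task below is an illustrative stand-in, not OpenAI's exact example), the contrast might look like this:

```python
# A vague prompt leaves the model guessing what you want; a specific one
# spells out exactly what the output should contain.
vague = "Summarize the meeting notes."

specific = (
    "Summarize the meeting notes in a single paragraph. Then write a "
    "markdown list of the speakers and each of their key points. "
    "Finally, list the next steps or action items suggested by the "
    "speakers, if any."
)
```

The second prompt pins down the format (paragraph, then list), the content (speakers, key points, action items), and the fallback ("if any") in one pass.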



Personas

Adopting a persona for the model can also yield useful results. Want it to sound like Shakespeare or mimic a popular author's style? Just ask!
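With OpenAI's chat API, a persona is typically set in the system message while the user message carries the actual task. A minimal sketch (the exact wording here is my own, not from the guide):

```python
# The system message establishes the persona; the user message asks the
# question. These would be passed as the `messages` argument of a chat
# completion call (the API call itself is omitted here).
messages = [
    {
        "role": "system",
        "content": (
            "You are William Shakespeare. Answer every question in "
            "Elizabethan English, in verse where possible."
        ),
    },
    {"role": "user", "content": "Explain why the sky is blue."},
]
```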



Delimiters

And if the task you're assigning is complex, consider using delimiters to mark distinct parts clearly.
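Triple quotes, XML-style tags, or section headers all work as delimiters. A small sketch, assuming triple quotes (the article text is invented for illustration):

```python
# The delimiters mark exactly where the input text begins and ends, so
# the instructions and the data to operate on don't blur together.
article = (
    "Large language models are trained to predict the next token in a "
    "sequence of text, and scale predictably with data and compute."
)

prompt = (
    "Summarize the article delimited by triple quotes in one sentence.\n\n"
    f'"""{article}"""'
)
```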


Steps

For detailed tasks, specifying the required steps can make it easier for the model to complete them. Similarly, providing examples can greatly help, especially when you want the model to adopt a particular style or format.
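Explicit numbered steps turn one fuzzy request into a checklist the model can follow in order. A hedged sketch of what such a system prompt might look like (wording is my own paraphrase of the pattern):

```python
# Each step names its input, its operation, and the prefix its output
# should carry, so the response structure is predictable.
system_prompt = (
    "Use the following step-by-step instructions to respond to user "
    "input.\n"
    "Step 1 - The user will provide text in triple quotes. Summarize "
    "this text in one sentence with the prefix 'Summary: '.\n"
    "Step 2 - Translate the summary from Step 1 into French, with the "
    "prefix 'Translation: '."
)
```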



Response Length Specification

If you have a preferred length for the output, don't hesitate to specify it. Whether you want it in terms of word count, sentences, paragraphs or bullet points, clear instructions will steer the model in the right direction.
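As a small illustration, a hypothetical helper (`with_length` is my own invention, not part of any library) that appends a length constraint to any task:

```python
# Appends a target length to a task description. The limit can be
# expressed in whatever unit suits the task: words, sentences,
# paragraphs, or bullet points.
def with_length(task: str, limit: str) -> str:
    return f"{task} Respond in {limit}."

prompt = with_length("Summarize the quarterly report below.", "3 bullet points")
```

Note that models follow counts of paragraphs or bullet points more reliably than exact word counts, so looser units are often the safer constraint.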



Reference Texts

Another noteworthy strategy is to provide reference texts. You can instruct the model to answer using the given information or even cite passages from the provided text. This way, you ensure the relevant data backs up the model’s responses.
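A sketch of this pattern (the reference text and question are invented for illustration): the prompt supplies the reference, restricts the model to it, and gives an explicit fallback for when the answer isn't there.

```python
# The model is told to answer only from the delimited article, and what
# to say when the article doesn't contain the answer.
reference = (
    "The Example Glacier retreated by roughly 400 meters between 1990 "
    "and 2020, according to the survey."
)
question = "How far did the Example Glacier retreat?"

prompt = (
    "Use the article delimited by triple quotes to answer the question. "
    "If the answer cannot be found in the article, write "
    '"I could not find an answer."\n\n'
    f'"""{reference}"""\n\nQuestion: {question}'
)
```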


Ask for Citations

Another practical tactic is to direct the model to include citations from a supplied reference text in its responses. This grounds the response in a known source, and the accuracy of each citation can then be verified programmatically by string matching against the supplied document.
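The verification side is simple to implement. A minimal sketch (the helper name and sample document are my own, assuming the model was asked to quote verbatim):

```python
# A citation quoted verbatim by the model is valid only if it appears
# exactly in the source document; anything else is flagged as fabricated.
def citation_is_valid(citation: str, document: str) -> bool:
    return citation.strip() in document

document = "The study enrolled 120 participants over six months."

assert citation_is_valid("enrolled 120 participants", document)
assert not citation_is_valid("enrolled 500 participants", document)
```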



Take it Slow

Finally, one last tip: take it slow. Like us, GPTs perform better when they're not rushed. A well-thought-out answer is often worth the wait. You can instruct the model to work out its own solution, use an inner monologue to guide its reasoning process or ask the model if it missed anything on previous passes.
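OpenAI's guide illustrates this with a grading task; as a hedged paraphrase of that pattern, the prompt asks the model to solve first and judge second:

```python
# Forcing the model to produce its own solution before evaluating the
# student's makes the final verdict rest on actual reasoning rather
# than a snap judgment.
prompt = (
    "First work out your own solution to the problem below. Then compare "
    "your solution to the student's solution and evaluate whether the "
    "student's solution is correct. Do not decide whether the student's "
    "solution is correct until you have solved the problem yourself."
)
```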



Prompts Are Here to Stay? 

As language model capabilities continue to evolve, the significance of prompt engineering may change but is unlikely to fade entirely. It's akin to the art of effective communication, where choosing the right words can often make the difference between being understood or not.
While improvements in AI might streamline interactions, the essence of tailoring a prompt to achieve the best response or the act of "prompt engineering" is deeply rooted in the mechanism of how we communicate, which is a hard principle to circumvent.

Shortcutting the Prompting Process

Looking at projects like Elon Musk's Neuralink, we can envision a future where direct brain-computer interfaces might reduce the need for explicit verbal or written prompts. That technology is still in its early stages, however, and even once it matures, asking the right questions and giving the right instructions will remain non-trivial. What it could shortcut is the need to spell out excessive amounts of detail for a task.

"Language Engineering"

It's also interesting to note that humans aren't immune to this kind of 'prompt engineering.' Our responses can be significantly influenced by how a question or request is framed.
A classic example is the power of the word 'because.' Studies have shown that people are more likely to comply with a request if a reason is provided, even if the reason isn't particularly strong. So, just as with AI, the way we shape our prompts in human interactions can greatly impact the responses we get.

Prompting is Communicating

In conclusion, the importance of prompt engineering, or effective communication, is unlikely to diminish as AI technology progresses. The techniques and strategies may evolve, but the core principle will remain critical: asking the right question in the right way will always be key to obtaining the best possible answer.
Tags: ML News