Summary, Sentiment, Question Answering & More: 5 Creative Tips for GPT-3 Prompt Engineering
Learn how to use a pre-trained GPT-3 model for text summary, sentiment analysis, and more
Introduction
GPT-3 likely needs no introduction. It's an incredibly powerful transformer model pioneered by the folks at OpenAI that's most commonly used for text generation. Like its predecessors and many other transformer-based models like BERT, GPT-3 is not just incredibly powerful but incredibly large: we're talking 175 billion parameters and a training cost of multiple millions of dollars. And while fine-tuning is available and works really well, GPT-3 can do some really fascinating things out of the box if you get a little creative.
Which is exactly what we’re here to do today.
The bulk of this report is going to be looking at ways we can use GPT-3 for things other than standard text generation tasks, things like summarization and sentiment analysis. Most of what you'll see here are screenshots from OpenAI's Playground (which is a UI where you can interact with GPT-3) alongside specific prompt types. To demonstrate the breadth of the model, we'll look at a whole bunch of topics, from entertainment to investment to agriculture, medicine, and politics.
In fact, here's an example of exactly that:

Here, you can see the prompt we entered into the Playground (“My thoughts on [topic], [topic], and [topic]”) and the result (in green).
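By the way, you don't have to use the Playground UI for any of this. If you'd rather call the API from Python, a completion like the one above only takes a few lines. Here's a minimal sketch assuming the pre-1.0 openai Python package and an API key in your environment; the engine name, the topics I filled in, and the sampling settings are all just illustrative:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes your key is set as an env var

response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative engine name; use whichever you have access to
    prompt="My thoughts on pizza, cats, and machine learning:",  # topics filled in for illustration
    max_tokens=128,
    temperature=0.7,
)

# The continuation is what the Playground would show in green.
print(response["choices"][0]["text"])
```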
Let’s get a little creative though:
Prompt 1: GPT-3 Does Sentiment Analysis
Sentiment analysis is a task where we're looking to assign some sentiment to a particular phrase. That sentiment is usually broken down broadly into three categories: positive, negative, and neutral. For example, if our model sees the phrase "This hamburger is terrible," we want it to understand that's negative.
This is usually done with dedicated sentiment models. But did you know that we can get GPT-3 to perform sentiment analysis at a level very similar to dedicated models trained for the task, just by designing the right prompt?

There definitely isn't just one way to prompt-engineer GPT-3 into doing sentiment analysis, but this explicit description seemed intuitive enough to me, and I found it worked quite well. In particular, it's really impressive how plain-language commands can guide the model's predictions. Here's how it does if I modify the sentence:


This could even be considered few-shot learning because in this case we are prompting the model to perform a new task by feeding it very little data - just the prompt itself in plain language.
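If you'd like to script this instead of using the Playground, here's a rough sketch of the same idea as an API call. The exact prompt wording, engine name, and settings are just my assumptions for illustration, not the ones from the screenshots above:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def classify_sentiment(sentence: str) -> str:
    """Ask GPT-3 to label a sentence as positive, negative, or neutral via a plain-language prompt."""
    prompt = (
        "Decide whether the sentiment of the following sentence is positive, negative, or neutral.\n\n"
        f"Sentence: {sentence}\n"
        "Sentiment:"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # illustrative engine name
        prompt=prompt,
        max_tokens=3,
        temperature=0.0,  # low temperature keeps the label as deterministic as possible
    )
    return response["choices"][0]["text"].strip()

print(classify_sentiment("This hamburger is terrible."))  # we'd expect something like "Negative"
```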
Prompt 2: A GPT-3 That Disagrees with Your Ideas
This one is actually what sparked my interest in prompt engineering for GPT-3 in the first place. I was talking to a really good friend and heard how he was using GPT-3 to disagree with different investment ideas in order to help himself better understand the potential "weak links" in those ideas and better assess them.
We can think of this one as a "GPT-3 to help you think and analyze" use case.
NB: It's really important to keep in mind that GPT-3 was trained on a large corpus of data from the open Internet, and letting that fact guide how you design a prompt helps a lot. I like to put myself in the shoes of how some generic article would frame a specific topic. But feel free to experiment. It's definitely as much of an art as it is a science at this point!
We'll start off by prompting GPT-3 with "The best way to make money this year is to" and it generates an opinion about cryptocurrencies. Then, we prompt it to be a devil's advocate with "However, I disagree with this because" and see how it begins to argue the opposite point of view.

We can also play around with the degree of "emotion" in our prompt. Again, text in black is what we - the user - input and the predictions are in green.

The point here? GPT-3 can help you stress-test opinions and preconceptions, potentially allowing you to reframe how you're thinking about everything from what copy to write to what to invest in to what makes a good first date:

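If you want to automate this back-and-forth rather than typing it into the Playground, the chaining is really just string concatenation: generate an opinion, append the disagreeing lead-in, and generate again. A quick sketch, with the same caveats as before about the engine name and settings being illustrative:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def complete(prompt: str) -> str:
    response = openai.Completion.create(
        engine="text-davinci-002",  # illustrative engine name
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    return response["choices"][0]["text"]

# Step 1: let GPT-3 state an opinion.
opinion_prompt = "The best way to make money this year is to"
opinion = complete(opinion_prompt)

# Step 2: append a devil's-advocate lead-in and let it argue the other side.
disagreement = complete(opinion_prompt + opinion + "\n\nHowever, I disagree with this because")

print(opinion_prompt + opinion)
print("However, I disagree with this because" + disagreement)
```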
Prompt 3: Text Summarization with GPT-3
Prompt engineering may also help you unlock the summarization capabilities of GPT-3. The idea here is pretty simple: in our prompt, we put in a passage of text that we want to summarize and then add something like "To summarize:" or "TL;DR".
For example, here I've taken a long plot synopsis of Avengers: Endgame from Wikipedia...

...then I pasted it into the OpenAI Playground and gave it the "TL;DR" prompt. These are the results:

Here's the same "TL;DR" for a plot description of The Shawshank Redemption.

Pretty good, right?
You could try out other prompts besides "TL;DR," like "in a couple of words," "to summarize," "to simplify," and similar.
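Programmatically, the pattern is just "passage, then cue." Here's a minimal sketch under the same assumptions as the earlier snippets; the synopsis file name is hypothetical, so point it at whatever text you want summarized:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize(passage: str, cue: str = "TL;DR:") -> str:
    """Append a summarization cue like 'TL;DR:' to a passage and let GPT-3 complete it."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # illustrative engine name
        prompt=f"{passage}\n\n{cue}",
        max_tokens=120,
        temperature=0.3,
    )
    return response["choices"][0]["text"].strip()

plot = open("endgame_synopsis.txt").read()  # hypothetical file holding the Wikipedia synopsis
print(summarize(plot))
print(summarize(plot, cue="To summarize:"))  # swap in other cues and compare
```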
Prompt 4: GPT-3 and Text Generation
GPT-3 is going to generate something even with no prompt, but that's typically going to be irrelevant and not that useful. With its 175 billion parameters, GPT-3 can generate text around all sorts of topics.
Here, I would say, mention explicitly what you want GPT-3 to generate text about and, as with all of the other prompts, try to write as naturally as possible.
This one's a dialogue example:

At this point you can start the next prompt below this text with a "B:" and steer the dialogue in the direction you want!
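In code, that steering amounts to ending your prompt with the speaker tag you want the model to continue from, and using a stop sequence so it doesn't run away with the whole conversation. A small sketch, with the dialogue opener and engine name being my own illustrative choices:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Illustrative opening; GPT-3 will continue from the trailing "B:" tag.
dialogue = (
    "A: I've been thinking about learning to paint.\n"
    "B:"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative engine name
    prompt=dialogue,
    max_tokens=60,
    temperature=0.8,
    stop=["\nA:"],  # stop once the model tries to speak for A again
)

print(dialogue + response["choices"][0]["text"])
```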
You can also ask it to list things. The possibilities are endless here.

Additionally? All this really highlights how extensive the training data was for GPT-3. I was surprised how many topics it produced smart, relevant copy for.
Prompt 5: Question-Answering with GPT-3
Similarly, GPT-3 does a solid job answering questions on a large variety of topics. For example:

However, while it can do a pretty good job at answering many questions on "timeless" topics, it can suffer certain pitfalls when it comes to the most relevant, up-to-date information.
For example, here's GPT-3's take on who's the current president of the US in 2022:

Why is this, exactly? This type of prediction isn't a problem with GPT-3 specifically - it's a transformer NLP model, not Google, after all! Essentially, models as large as GPT-3 simply can't be retrained all the time, so there's a natural lag between today and when the model was last trained. News stories and current events are not exactly its forte.
So, what do we do if we want GPT-3 to answer questions with new, more relevant information? Prompt engineering can help here too.
If we include in the prompt an extract from Wikipedia about who the current US president is - Joe Biden - we can have GPT-3 answer questions directly related to the text we fed into it as a prompt:

And, just like that, with the addition of some new, relevant information to our prompt, GPT-3 is now able to answer questions it couldn't get right before!

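In code, this "stuff the context into the prompt" trick is just prepending your reference text to the question. A minimal sketch under the same assumptions as before; the short extract below stands in for a longer, fresher passage from Wikipedia or anywhere else:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Stand-in for a longer, up-to-date extract you'd paste in as context.
context = (
    "Joe Biden is the 46th and current president of the United States, "
    "having assumed office on January 20, 2021."
)

question = "Who is the current president of the United States?"

response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative engine name
    prompt=f"{context}\n\nQ: {question}\nA:",
    max_tokens=30,
    temperature=0.0,
)

print(response["choices"][0]["text"].strip())
```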
GPT-3 Fine-Tuning
The completions above are all from GPT-3 straight out of the box. But when you want GPT-3 to generate something even better for a specific domain, fine-tuning is a great option. GPT-3 performs really well here, sometimes on just a few hundred training rows. In fact, we've written about this in the past!
- I wrote a practical guide on how we can fine-tune GPT-3 to generate sci-fi, namely new Doctor Who episode synopses
- Here's a fun April Fools blog post where I fine-tuned GPT-3 to generate technical machine learning blog posts. Yes, exactly like the one you're reading right now!
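If you want to try a small fine-tune yourself, the workflow (at the time of writing) is roughly: write your examples as prompt/completion pairs in a JSONL file, upload it, and start a fine-tune job. Here's a rough sketch using the pre-1.0 openai Python package; the example rows, file name, and base model are all placeholders, and the posts above walk through the details properly:

```python
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A few hundred rows like these is often enough to see a real difference.
examples = [
    {"prompt": "Doctor Who episode synopsis:", "completion": " The Doctor lands on a planet where..."},
    # ... more prompt/completion pairs ...
]

with open("training_data.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")

# Upload the training file, then kick off the fine-tune (base model name is illustrative).
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
openai.FineTune.create(training_file=upload["id"], model="davinci")
```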
Conclusion
We've seen before how flexible GPT-3 can be, and interacting with it simply on a prompt engineering level (without additional fine-tuning) is just further proof of how powerful this model is. We hope this blog inspires you to take GPT-3 for a spin and to be creative with what you ask it to do.
After all, sometimes all you have to do is ask. You might be surprised what GPT-3's answer is.