26 Tips for Prompt Engineering at Every Model Size
Get the most out of your LLM without any fine tuning!
Created on January 3 | Last edited on June 27
In the world of artificial intelligence, the ability to communicate effectively with large language models like ChatGPT is crucial. The study "Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4" by Sondos Mahmoud Bsharat, Aidar Myrzakhan, and Zhiqiang Shen of the VILA Lab at Mohamed bin Zayed University of AI offers valuable insights into this challenge.
Presenting 26 guiding principles for prompt engineering, this work aims to enhance interactions with LLMs for both developers and general users. By refining the way prompts are crafted, the study, backed by extensive experiments on models such as LLaMA-1/2 and GPT-3.5/4, highlights an approach that eschews direct model fine-tuning in favor of optimized communication. This post will delve into these principles, revealing how they can revolutionize our engagements with advanced AI systems.
26 Tips
These prompt engineering tips are guidelines for creating effective prompts when interacting with a Language Model like ChatGPT. They fall into various categories: prompt structure, clarity, specificity, user interaction, content and language style, and complex tasks.
Here's a brief explanation of each of these principles:
Direct Communication: Avoid using polite phrases like “please” or “thank you,” as these are unnecessary.
Audience Integration: Tailor the prompt, considering the audience's expertise level.
Breaking Down Tasks: Simplify complex tasks into smaller, more manageable prompts.
Affirmative Directives: Use positive language like 'do' instead of negative language like 'don’t'.
Clarity in Explanation: Request explanations in simple terms, suitable for a beginner or a young person.
Incentivizing Quality: Suggest a tip for a better solution (e.g., "I'm going to tip $xxx for a better solution!").
Example-Driven Prompting: Use few-shot prompting with examples.
Structured Formatting: Begin prompts with ‘###Instruction###’, followed by ‘###Example###’ or ‘###Question###’ if relevant, separated by line breaks.
Direct Task Assignment: Use phrases like “Your task is” and “You MUST”.
Penalty Notification: State that there will be penalties for certain actions or responses.
Natural Language Response: Request answers in a natural, human-like manner.
Leading Words: Guide the response with phrases like “think step by step”.
Unbiased Responses: Ensure answers are unbiased and stereotype-free.
Interactive Engagement: Allow the model to ask clarifying questions.
Learning with Testing: Request teaching on a topic with a test at the end, without providing the answers immediately.
Role Assignment: Assign a specific role to the language model.
Use of Delimiters: Use delimiters to structure prompts effectively.
Repetition for Emphasis: Repeat specific words or phrases for emphasis.
Chain-of-Thought with Examples: Combine Chain-of-Thought reasoning with few-shot prompts.
Output Primers: End your prompt with the start of the expected response.
Detailed Text Requests: Ask for detailed essays or texts on specific topics.
Text Revision: Request grammar and vocabulary improvements without style alterations.
Complex Coding Tasks: For coding spanning multiple files, request scripts to create or modify files accordingly.
Continuation Prompts: Continue or finish a provided beginning of text.
Clear Requirements: State explicit requirements for content creation.
Imitating Style: Request text similar in style to a provided sample.
These principles are designed to maximize the effectiveness and clarity of prompts, leading to more accurate and useful responses from the model.
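Several of these principles can be combined in a single prompt. Below is a minimal sketch, in Python, of a helper that assembles a prompt using structured formatting (principle 8), role assignment (principle 16), direct task assignment (principle 9), leading words (principle 12), and an output primer (principle 20). The function name and exact wording are illustrative, not taken from the paper.

```python
def build_prompt(role, instruction, examples=None, question=None):
    """Assemble a structured prompt using ###-delimited sections."""
    # Role assignment: tell the model who it should be.
    parts = [f"You are {role}.", "###Instruction###", instruction]
    if examples:
        # Example-driven (few-shot) prompting.
        parts.append("###Example###")
        parts.extend(examples)
    if question:
        parts.append("###Question###")
        parts.append(question)
    # Leading words plus an output primer to steer the response.
    parts.append("Think step by step.")
    parts.append("Answer:")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="an expert Python tutor",
    instruction="Your task is to explain the answer to a beginner.",
    examples=["[x * 2 for x in range(3)] evaluates to [0, 2, 4]"],
    question="What does [x.upper() for x in 'abc'] return?",
)
print(prompt)
```

Ending the prompt with "Answer:" primes the model to begin its completion immediately with the answer rather than with preamble, and the `###`-delimited sections make the instruction, examples, and question easy for the model to distinguish.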


The above chart illustrates the improvement in response quality when the introduced principles are applied to prompts. It categorizes LLMs by size: "small-scale" refers to 7B models, "medium-scale" to 13B models, and "large-scale" to the larger 70B models and GPT-3.5/4. The comparison shows that every scale of LLM benefits from these principles, with a notable enhancement in response quality across all model sizes.