Prompt Engineering Best Practices Guide

Please create a comprehensive, up-to-date list of best practices for prompt engineering for large language models such as GPT, Claude, and Gemini. Include clear, actionable tips supported by the latest reputable sources. Organize the tips in a numbered list and cover both beginner-friendly and advanced techniques. For each tip, include a brief rationale explaining why it works and provide a concise example to illustrate it.

You·today at 12:07 am

Prompt engineering is the art of crafting inputs that guide large language models (LLMs) like GPT, Claude, or Google’s Gemini to produce the desired output. Below are 12 best practices – ranging from beginner-friendly tips to advanced techniques – each with a rationale and a short example:

  1. Use the Most Suitable Model or Version – For best results, choose the latest and most capable language model available for your task. Newer models (e.g., GPT-4, Claude 2) tend to follow complex instructions more reliably than older versions (help.openai.com). This isn’t a prompt wording trick per se, but a foundational step: a stronger model often requires less prompt tweaking to get high-quality output. Example: If both GPT-3.5 and GPT-4 are options, use GPT-4 for a complex analysis prompt – OpenAI notes that newer models are generally easier to prompt engineer effectively (help.openai.com).

  2. Be Clear and Specific with Instructions – Write prompts in a clear, unambiguous manner so the model knows exactly what to do. Avoid vague language or open-ended requests; instead, describe the task in detail (including the desired outcome, format, length, style, etc.) (docs.anthropic.com) (help.openai.com). Think of the AI as a very literal-minded assistant: it performs better when your directions leave little room for misunderstanding. Example: Instead of asking “Tell me about renewable energy,” prompt with “Explain the benefits and drawbacks of solar and wind energy for a general audience in 2–3 paragraphs.” – this specificity guides the model on what to cover, how to style the answer, and how long it should be.
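When prompts are assembled in code, the same specificity can be baked into a template. A minimal Python sketch (the `build_prompt` helper and its parameters are illustrative, not from any library):

```python
def build_prompt(audience: str, length: str, coverage: list[str]) -> str:
    """Build a specific prompt instead of a vague one-liner: every
    constraint (what to cover, for whom, how long) is stated explicitly."""
    points = " and ".join(coverage)
    return (
        f"Explain the benefits and drawbacks of {points} "
        f"for {audience} in {length}."
    )

# Vague:    "Tell me about renewable energy."
# Specific:
prompt = build_prompt(
    audience="a general audience",
    length="2-3 paragraphs",
    coverage=["solar", "wind energy"],
)
```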

  3. Provide Relevant Context or Background – Include any necessary context, background facts, or assumptions in your prompt so the model isn’t guessing missing details. LLMs have no memory of your conversation or domain specifics beyond what you provide each time, so set the stage clearly (docs.anthropic.com). Describe the situation, the audience, or the purpose of the output if it matters. Providing such context helps the model tailor its answer appropriately and prevents generic or incorrect responses (cloud.google.com). Example: If you want deployment advice, start with context like: “I’m building a small web app using Python Flask on Google Cloud with a MySQL database.” Then ask: “How can I deploy it using Cloud Run?” This way, the model’s answer will be specific to your technology stack and goal.
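Supplying context programmatically is just a matter of prepending it before the question. A small sketch (the `with_context` helper is a made-up name for illustration):

```python
def with_context(context: str, question: str) -> str:
    # State the situation first, then ask; the model answers relative
    # to the stack and goal described rather than generically.
    return f"Context: {context}\n\nQuestion: {question}"

prompt = with_context(
    "I'm building a small web app using Python Flask on Google Cloud "
    "with a MySQL database.",
    "How can I deploy it using Cloud Run?",
)
```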

  4. Structure Your Prompt and Use Delimiters – Organize the prompt so that instructions, context, and examples are clearly separated. Models respond better when the prompt is well-structured and unambiguous (help.openai.com). You can use delimiters like triple quotes (""") or markdown code fences to isolate any input data or long text from the actual instructions (help.openai.com). In more complex prompts, consider using labels or even XML-like tags for different sections (e.g., <instructions>…</instructions> and <context>…</context>) to prevent the model from mixing them up (docs.anthropic.com). Example: In a summarization prompt, you might write:

    Summarize the following passage in one paragraph.
    Text: """ [Paste the passage here] """
    (This clearly delineates the instruction from the text to summarize.)
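The delimiter patterns above are easy to generate in code. A sketch with hypothetical helpers (`summarization_prompt`, `tagged_prompt` are illustrative names):

```python
def summarization_prompt(passage: str) -> str:
    # Triple-quote delimiters keep the instruction and the input text
    # clearly separated, so the model never confuses the two.
    return (
        'Summarize the following passage in one paragraph.\n'
        'Text: """\n'
        f"{passage}\n"
        '"""'
    )

def tagged_prompt(instructions: str, context: str) -> str:
    # For complex prompts, XML-style tags serve the same purpose.
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<context>\n{context}\n</context>"
    )
```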

  5. Demonstrate the Desired Output Format – If you need the answer in a certain format (a list, JSON, a table, etc.), show or describe that format in your prompt. Providing an example output or an explicit template makes it much easier for the model to comply (help.openai.com). Rather than just saying “Give me an output in format X,” illustrate it. This “show and tell” approach removes ambiguity about how the response should look. Example: If you want a JSON output, you could prompt: “Extract the following info as JSON with keys Name and Age. Text: Alice is 30 years old.” The model is then likely to respond with an exact JSON object, e.g. { "Name": "Alice", "Age": 30 }, because you explicitly requested that format.
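When you request structured output, it also pays to validate what comes back. A sketch using Python's standard `json` module (the helper names are illustrative):

```python
import json

def extraction_prompt(text: str) -> str:
    # Show the exact output template rather than just naming the format.
    return (
        "Extract the following info as JSON with keys Name and Age, "
        'e.g. {"Name": "...", "Age": 0}.\n'
        f"Text: {text}"
    )

def parse_reply(reply: str) -> dict:
    # Validate the reply against the requested schema instead of
    # trusting the model to have complied.
    data = json.loads(reply)
    assert set(data) == {"Name", "Age"}, f"unexpected keys: {set(data)}"
    return data

# A well-formed model reply for "Alice is 30 years old.":
record = parse_reply('{"Name": "Alice", "Age": 30}')
```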

  6. Include Examples for Few-Shot Prompting – Providing a few examples of the task (with inputs and desired outputs) in your prompt can greatly improve the model’s performance (docs.anthropic.com). These examples act as demonstrations of what you expect, helping the LLM understand the pattern or format you want. Few-shot prompting is especially useful for enforcing a specific style or structured output consistently (docs.anthropic.com) (docs.anthropic.com). Aim for a handful of representative examples (say, 2–5) that cover typical scenarios or edge cases. Example: To get an AI to classify sentiment, you might first give a couple of labeled examples, for instance:

    • Input: “I love this product!” → Sentiment: Positive
    • Input: “The service was terrible.” → Sentiment: Negative
      Then provide your actual Input for classification. By seeing these examples, the model is primed to output the correct sentiment for the new input.
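Few-shot prompts are typically assembled from a list of labeled examples. A minimal sketch (the helper name and example data are illustrative):

```python
EXAMPLES = [
    ("I love this product!", "Positive"),
    ("The service was terrible.", "Negative"),
]

def few_shot_prompt(new_input: str) -> str:
    # Labeled demonstrations first, then the unlabeled case; the
    # trailing "Sentiment:" cues the model to complete the pattern.
    shots = "\n".join(
        f'Input: "{text}" -> Sentiment: {label}' for text, label in EXAMPLES
    )
    return f'{shots}\nInput: "{new_input}" -> Sentiment:'
```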

  7. Break Down Complex Tasks – Don’t overload the model with too many objectives at once. If a query is complex or multi-faceted, break it into smaller, more manageable pieces. You can either prompt step-by-step within a single prompt (by explicitly instructing the model through an ordered list of sub-tasks), or handle it in multiple prompts/turns (complete one subtask, then feed its output into the next prompt). This divide-and-conquer approach makes it easier for the model to focus and reduces errors (cloud.google.com). Example: Rather than asking in one go, “Draft a business plan for a new product and also create three marketing slogans for it,” you could first prompt, “Draft a one-page business plan for a new [product details].” Once you get the plan, follow up with, “Great. Now based on that plan, give me three catchy marketing slogans for the product.” Splitting the task ensures each part gets the model’s full attention.
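The multi-turn variant can be sketched as a small pipeline. Here `call_model` is a stub standing in for a real LLM API call, so the chaining pattern itself runs as-is:

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns canned text so the
    # chaining pattern can be executed end to end without a network.
    return f"[model output for: {prompt[:40]}...]"

def chained_tasks(product_details: str) -> tuple[str, str]:
    # Step 1: one focused subtask.
    plan = call_model(f"Draft a one-page business plan for {product_details}.")
    # Step 2: feed step 1's output into the next prompt.
    slogans = call_model(
        f"Based on this plan:\n{plan}\n"
        "Give me three catchy marketing slogans for the product."
    )
    return plan, slogans
```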

  8. Encourage Step-by-Step Reasoning – For problems requiring reasoning, math, or complex logic, explicitly prompt the model to work through the solution step by step. This technique, known as chain-of-thought prompting, often improves accuracy on challenging tasks (docs.anthropic.com). By giving the model “permission” to show its reasoning process, you help it break the problem into smaller steps and reduce oversight errors (docs.anthropic.com). You can simply say “Let’s think this through step by step,” or instruct the model to first outline its reasoning then give an answer. This leads to more coherent and correct results (though responses will be longer due to the added reasoning). Example: “Calculate the total sales for each quarter. First, list the sales of each month from the data, then sum them by quarter, and finally give the overall total.” – Here, the prompt guides the model to produce intermediate calculations (monthly lists and quarterly sums) before the final answer, helping ensure nothing is skipped.
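A chain-of-thought prompt like the sales example can be generated from a question plus an ordered list of steps. A sketch (the `step_by_step` helper is an illustrative name):

```python
def step_by_step(question: str, steps: list[str]) -> str:
    # Numbering the intermediate steps makes omissions visible and
    # nudges the model to show its work before the final answer.
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"{question}\nLet's think this through step by step:\n{numbered}"

prompt = step_by_step(
    "Calculate the total sales for each quarter.",
    [
        "List the sales of each month from the data.",
        "Sum them by quarter.",
        "Give the overall total.",
    ],
)
```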

  9. Assign a Role or Persona to the Model – Prompt the model to “act as” a certain expert or character relevant to your task. This provides implicit context and can yield more specialized, focused responses (docs.anthropic.com). Role prompting (via a dedicated system message or an initial instruction like “You are a …”) helps the model adopt the tone, terminology, and perspective of that role (docs.anthropic.com). For example, telling the model it is a helpful tutor, a harsh critic, or a domain expert will influence the style and depth of its answer. Example: For instance, in a chat setting you can specify:

    System: You are a cybersecurity expert specializing in network security.  
    User: Explain the top vulnerabilities in IoT devices.  
    

    Here the system role primes the model to respond like a security expert, so the answer will be more domain-specific and accurate to that role.
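In code, role prompting maps onto the system/user message structure used by most chat APIs. A sketch building that message list (the dict shape follows the common chat-completions convention; `role_prompt` is a made-up helper):

```python
def role_prompt(role_description: str, question: str) -> list[dict]:
    # The system message fixes the persona; the user message carries
    # the actual question.
    return [
        {"role": "system", "content": f"You are {role_description}."},
        {"role": "user", "content": question},
    ]

messages = role_prompt(
    "a cybersecurity expert specializing in network security",
    "Explain the top vulnerabilities in IoT devices.",
)
```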

  10. Prefer Positive Phrasing and Precision Over Negatives – Instead of only telling the model what not to do, guide it by stating what it should do. Instructions framed in positive, specific terms are easier for the model to follow correctly (help.openai.com). Likewise, eliminate imprecise qualifiers (“a bit,” “not too much”) and use concrete terms or limits so the model isn’t left guessing (help.openai.com). This reduces the chance of the model doing the forbidden thing or producing an overly broad output. Example: Rather than saying “Don’t be too verbose,” say “Give a brief response of 3–4 sentences.” And instead of “Do not ask for personal data,” you might write “Focus only on general, non-personal information.” In each case, the latter instruction is clearer about the desired behavior.
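A rough way to enforce this tip mechanically is to lint prompts for negative framing and vague qualifiers before sending them. A toy sketch (the word lists are illustrative, not exhaustive):

```python
import re

VAGUE = ["a bit", "not too much", "somewhat", "fairly"]

def lint_prompt(prompt: str) -> list[str]:
    # Flag negative framing and imprecise qualifiers so they can be
    # rewritten as positive, concrete instructions.
    issues = []
    if re.search(r"\b(don't|do not|never|avoid)\b", prompt, re.IGNORECASE):
        issues.append("negative framing: say what TO do instead")
    for q in VAGUE:
        if q in prompt.lower():
            issues.append(f"imprecise qualifier: '{q}' - use a concrete limit")
    return issues
```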

  11. Provide Reference Text or Data for Accuracy – Large models have knowledge cutoffs and may hallucinate details. If your query depends on specific or up-to-date information, consider supplying a relevant excerpt or data snippet in the prompt itself (malith-dulsara.hyvorblogs.io). By giving the model concrete reference material, you ground its response in facts rather than forcing it to guess. Similarly, if the platform allows it, you can use tools or retrieval techniques (e.g. feeding results from a web search or database) to equip the model with external knowledge or perform computations (malith-dulsara.hyvorblogs.io). Augmenting prompts with factual context greatly improves reliability and truthfulness. Example: If you want an analysis of a recent event, include a quick summary of that event in your prompt. For instance:

    Background: In October 2024, Company X released a new product, generating significant market buzz.
    Question: Given this information, what are the potential impacts on the industry?
    (By providing the background, the model has current facts to base its answer on, instead of relying on outdated training data.)
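The retrieval-augmented pattern can be sketched with a stubbed retrieval step. Here `retrieve` returns canned snippets standing in for a real web search or vector-database lookup:

```python
def retrieve(query: str) -> list[str]:
    # Stand-in for a real retrieval step (web search, database, ...);
    # returns canned snippets so the pattern runs as-is.
    return [
        "In October 2024, Company X released a new product, "
        "generating significant market buzz.",
    ]

def grounded_prompt(question: str) -> str:
    # Supply the facts first, then ask; the model reasons over the
    # provided background instead of stale training data.
    background = "\n".join(f"- {s}" for s in retrieve(question))
    return (
        f"Background:\n{background}\n"
        f"Question: Given this information, {question}"
    )

prompt = grounded_prompt("what are the potential impacts on the industry?")
```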

  12. Iterate and Refine the Prompt – Treat prompt design as an iterative process. Don’t expect perfection on the first try – experiment with rephrasing, adding details, or adjusting your approach based on the model’s output. Test variations methodically (changing one element at a time) to see how it affects the response (malith-dulsara.hyvorblogs.io). By analyzing the model’s mistakes or irrelevant answers, you can gradually hone your prompt to get better results. (Advanced users even use one model to critique or refine prompts for another model.) Example: If the initial output is too vague, revise your prompt to explicitly request “three specific examples” or to “use a formal tone,” then try again. Each iteration teaches you what the model needs, and after a few rounds of tweaking, you’ll converge on a prompt that reliably yields the desired outcome.
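The refinement loop can be automated in a crude form: test a prompt, and if the output fails a check, add one refinement and retry. A toy sketch with a stubbed model call (`call_model`, `too_vague`, and the refinement list are all illustrative):

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call; always returns a vague answer
    # here so the loop exercises every refinement.
    return "Renewable energy has several benefits."

def too_vague(reply: str) -> bool:
    # Toy check: a reply without concrete examples counts as vague.
    return "for example" not in reply.lower()

REFINEMENTS = [
    "Include three specific examples.",
    "Use a formal tone.",
]

def refine(prompt: str) -> str:
    # Change one element per round and re-test, keeping what helps.
    for extra in REFINEMENTS:
        if not too_vague(call_model(prompt)):
            break
        prompt = f"{prompt} {extra}"
    return prompt

final = refine("Summarize the benefits of renewable energy.")
```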

o3 Deep Research·today at 12:16 am