[{"content":" TLDR: Be specific, not short. Use delimiters, ask for structured output, give few-shot examples, and tell the model to think step-by-step. Never expect a perfect prompt on the first try — iterate. When summarising, specify the audience. Temperature controls creativity vs. predictability.\nI recently went through ChatGPT Prompt Engineering for Developers by Andrew Ng and Isa Fulford on DeepLearning.AI. It\u0026rsquo;s a short course — about 90 minutes — but it packs in a surprising number of practical insights that changed how I interact with LLMs day-to-day.\nHere are the things that stuck with me.\nClear Does Not Mean Short This is the single most important takeaway. When people say \u0026ldquo;write a clear prompt,\u0026rdquo; the instinct is to make it shorter. That\u0026rsquo;s wrong. Clear means precise, not brief.\nYou cannot let the LLM play guessing games. Vague instructions force the model into a best-effort mode where it fills in the gaps with assumptions — and those assumptions are often wrong.\nA longer, highly specific prompt will almost always outperform a short, ambiguous one.\nPractical Guidelines The course lays out a set of concrete techniques. These are the ones I keep coming back to:\nUse Delimiters Wrap the text you want the model to operate on in clear delimiters — triple backticks, XML tags, whatever works. This does two things:\nIt makes the boundary between instruction and data unambiguous. It acts as a defense against prompt injection — if user-supplied text contains instructions, the delimiters help the model distinguish them from your actual prompt. Ask for Structured Output If you want JSON, say so. If you want a numbered list, say so. Don\u0026rsquo;t leave the output format to chance.\nCite Examples (Few-Shot Prompting) Give the model one or two examples of the input-output format you expect. 
This is one of the simplest and most effective techniques — the model picks up on the pattern immediately.

### Give the Model Time to Think

This one is subtle but powerful. Instead of asking the model to jump straight to an answer, ask it to work through the problem step by step.

This works because the model is literally following instructions — if you tell it to reason first and answer second, it will produce intermediate reasoning that leads to better final answers.

### Verify Before Judging

A specific application of the above: instead of saying “Here’s my solution, is it correct?”, say “Work out your own solution first, then compare it to mine.”

Why? Because if you just ask the model to evaluate your work, it will skim through it. Individual lines might look correct while the overall logic is broken. If the model derives its own answer independently, it’s far more likely to catch errors.

### Cite Sources and Quote Exactly

When asking the model to pull information from a source, ask it to quote the relevant passages verbatim. This grounds the response in the actual text rather than in a hallucinated paraphrase.

## Prompt Development Is Iterative

If there’s one thing that humbled me, it’s this: no one writes a perfect prompt on the first try.

Andrew Ng — who has been teaching AI for years — said plainly that he has never met a single person who can write a good prompt in one shot. If he’s humble enough to say that, I’m inclined to believe him.

The process is:

1. Start with a prompt.
2. Evaluate whether the output matches what you want.
3. Analyse the errors — what’s wrong? Is it too vague? Too long? Missing context?
4. Refine the prompt and repeat.
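That loop can be sketched in code. Everything here is a hypothetical stand-in — `run_model` fakes an LLM call, and the “evaluation” is just a check that the output parses as JSON:

```python
import json

def run_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call. It only returns valid
    # JSON once the prompt explicitly demands it, to mimic a model
    # drifting off-format under a vague instruction.
    if "JSON" in prompt:
        return '{"title": "Q3 report", "pages": 12}'
    return "Title: Q3 report, Pages: 12"

def matches_expectation(output: str) -> bool:
    # Step 2: evaluate the output against what you actually want.
    try:
        json.loads(output)
        return True
    except ValueError:
        return False

# Step 1: start with a prompt. Steps 3 and 4: analyse the failure
# (the output was prose, not JSON), refine the prompt, and repeat.
attempts = [
    "Extract the title and page count from the document.",
    "Extract the title and page count. Respond with a JSON object only.",
]
for prompt in attempts:
    output = run_model(prompt)
    if matches_expectation(output):
        break
```

In real use the refinement comes from you reading the failed output, not from a pre-written list of attempts — the list above just makes the loop runnable.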
All the “perfect prompt templates” floating around the internet might be useful as starting points, but for your specific use case, you must always iterate.

## Summarisation With Intent

One of the most practical applications covered in the course is text summarisation. The key insight: don’t just say “summarise this.”

You need to provide the lens through which you want the summary written. Who is the audience? What do they care about?

For example, in a technical domain I want the summary to surface the important technical details — not a generic overview. The way to do this is to specify the audience or the focus area in the prompt.

A prompt I’ve started using myself follows this pattern: “Summarise the following for a senior backend engineer, focusing on architectural decisions and trade-offs.”

## The Temperature Parameter

The temperature setting controls randomness in the model’s output:

- **Lower temperature (e.g., 0.0 – 0.3)** → more deterministic, more predictable. Use this for tasks where you want consistency — data extraction, classification, code generation.
- **Higher temperature (e.g., 0.7 – 1.0)** → more random, more creative. Use this for brainstorming, creative writing, or generating diverse alternatives.

The mental model: higher temperature means the model is more willing to take risks with its word choices, which can lead to either creative brilliance or total nonsense.

This post is based on my notes from *ChatGPT Prompt Engineering for Developers* on DeepLearning.AI.
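To ground the temperature intuition, here is a toy sketch in plain Python of how temperature rescales token scores before sampling. This is the general idea, not the actual API of any particular model:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide the raw token scores by the temperature, then normalise.
    # Low temperature sharpens the distribution (the top token dominates,
    # so sampling is near-deterministic); high temperature flattens it
    # (riskier, more varied word choices).
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # raw scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 1.5)
```

At temperature 0.2 the first token takes nearly all of the probability mass; at 1.5 the three tokens are much closer together, so sampling varies far more between runs.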
If you write code and interact with LLMs regularly, it’s worth the 90 minutes.