TLDR: Be specific, not short. Use delimiters, ask for structured output, give few-shot examples, and tell the model to think step-by-step. Never expect a perfect prompt on the first try — iterate. When summarising, specify the audience. Temperature controls creativity vs. predictability.

I recently went through ChatGPT Prompt Engineering for Developers by Andrew Ng and Isa Fulford on DeepLearning.AI. It’s a short course — about 90 minutes — but it packs in a surprising number of practical insights that changed how I interact with LLMs day-to-day.

Here are the things that stuck with me.

Clear Does Not Mean Short

This is the single most important takeaway. When people say “write a clear prompt,” the instinct is to make it shorter. That’s wrong. Clear means precise, not brief.

You cannot let the LLM play guessing games. Vague instructions force the model into a best-effort mode where it fills in the gaps with assumptions — and those assumptions are often wrong.

A longer, highly specific prompt will almost always outperform a short, ambiguous one.

Practical Guidelines

The course lays out a set of concrete techniques. These are the ones I keep coming back to:

Use Delimiters

Wrap the text you want the model to operate on in clear delimiters — triple backticks, XML tags, whatever works. This does two things:

  1. It makes the boundary between instruction and data unambiguous.
  2. It acts as a defense against prompt injection — if user-supplied text contains instructions, the delimiters help the model distinguish them from your actual prompt.

Ask for Structured Output

If you want JSON, say so. If you want a numbered list, say so. Don’t leave the output format to chance.

Cite Examples (Few-Shot Prompting)

Give the model one or two examples of the input-output format you expect. This is one of the simplest and most effective techniques — the model picks up on the pattern immediately.

Give the Model Time to Think

This one is subtle but powerful. Instead of asking the model to jump straight to an answer, ask it to work through the problem step by step.

This works because the model is literally following instructions — if you tell it to reason first and answer second, it will produce intermediate reasoning that leads to better final answers.

Verify Before Judging

A specific application of the above: instead of saying “Here’s my solution, is it correct?”, say “Work out your own solution first, then compare it to mine.”

Why? Because if you just ask the model to evaluate your work, it will skim through it. Individual lines might look correct, but the overall logic might be broken. If the model derives its own answer independently, it’s far more likely to catch errors.

Cite Sources and Quotes Exactly

When asking the model to pull information from a source, ask it to quote the relevant passages verbatim. This grounds the response in the actual text rather than a hallucinated paraphrase.

Prompt Development Is Iterative

If there’s one thing that humbled me, it’s this: no one writes a perfect prompt on the first try.

Andrew Ng — the person who has been teaching AI for years — said plainly that he has never met a single person who can write a good prompt in one shot. If he’s humble enough to say that, I’m inclined to believe him.

The process is:

  1. Start with a prompt.
  2. Evaluate whether the output matches what you want.
  3. Analyse the errors — what’s wrong? Is it too vague? Too long? Missing context?
  4. Refine the prompt and repeat.

All the “perfect prompt templates” floating around on the internet might be useful as starting points, but for your specific use case, you must always iterate.

Summarisation With Intent

One of the most practical applications covered in the course is text summarisation. The key insight: don’t just say “summarise this.”

You need to provide the lens through which you want the summary written. Who is the audience? What do they care about?

For example, if I’m in a technical domain, I want the summary to surface the important technical details — not a generic overview. The way you do this is by specifying the audience or the focus area in the prompt.

A prompt I’ve started using myself follows this pattern: “Summarise the following for a senior backend engineer, focusing on architectural decisions and trade-offs.”

The Temperature Parameter

The temperature setting controls randomness in the model’s output:

  • Lower temperature (e.g., 0.00.3) → more deterministic, more predictable. Use this for tasks where you want consistency — data extraction, classification, code generation.
  • Higher temperature (e.g., 0.71.0) → more random, more creative. Use this for brainstorming, creative writing, or generating diverse alternatives.

The mental model: higher temperature means the model is more willing to take risks with its word choices, which can lead to either creative brilliance or total nonsense.


This post is based on my notes from ChatGPT Prompt Engineering for Developers on DeepLearning.AI. If you write code and interact with LLMs regularly, it’s worth the 90 minutes.