How to Write AI Prompts: The Complete Guide

Most people use AI wrong — not because they don't understand the technology, but because they don't know how to ask. The difference between a vague prompt and a precise one can be the difference between a useless wall of text and exactly what you needed.

This guide covers everything you need to write effective AI prompts: the core principles that work across every model, five proven prompt patterns with real examples, model-specific tips for ChatGPT, Claude, and Gemini, and the mistakes most people make. By the end, you'll know why your prompts underperform and how to fix them.

Build Better Prompts in Minutes

Use the SnapUtils AI Prompt Builder to structure prompts with real-time quality scoring, 15 starter templates, and 6 enhancement patterns. 100% free, no login required.

Open AI Prompt Builder →

What Is an AI Prompt?

An AI prompt is the input you give to a large language model (LLM) to request a response. A prompt can be a single sentence, a multi-paragraph instruction, or a structured conversation with multiple exchanges. The term "prompt engineering" refers to the practice of writing inputs that reliably produce the desired output — not just any output, but one that actually solves your problem.

The key insight: LLMs don't guess what you want — they predict what text should come next based on everything you've told them. Your prompt is that context. The better your context, the more accurate the prediction.

The 6 Core Principles of Effective AI Prompts

Before diving into patterns and examples, internalize these six principles. Every great prompt follows them:

1. Be Specific, Not Vague

"Write something" produces nothing useful. "Write a 150-word product description for a B2B SaaS tool targeting engineering managers, focusing on time savings, with a confident tone" produces something you can work with. The more specific you are about format, length, audience, and tone, the less editing you'll need to do afterward.

2. Give the AI a Role or Persona

Telling the AI who to be dramatically changes its output. "Act as a senior software engineer with 15 years of experience reviewing pull requests" produces different advice than "act as a junior developer." Role-setting focuses the model's knowledge and calibrates its vocabulary and assumptions.

3. Define the Output Format Upfront

Tell the AI how you want the response structured before it generates it. "Format as a table with columns: Name, Strengths, Weaknesses, Recommended Use Case" is far more useful than "give me a comparison." The model will format it correctly on the first pass rather than forcing you to re-prompt.

4. Provide Context — Don't Assume Prior Knowledge

LLMs have no memory of previous conversations (unless you include the history). Every new conversation starts blank. If your request depends on context — your industry, your product, your audience — include it in the prompt. The AI can't read your mind, and it won't ask follow-up questions unless you tell it to.

5. Use Constraints to Focus the Output

Constraints are more powerful than instructions. "Write 200 words" is more useful than "write a short description." "Use simple language for non-technical readers" is more actionable than "don't be too technical." Constraints cut off irrelevant directions and force relevance.

6. Chain Complex Tasks into Steps

A single prompt attempting to cover a complex multi-step task produces a shallow response. Breaking it into sequential prompts — where each step's output feeds into the next — produces far better results. This is prompt chaining, and it's one of the most reliable patterns for complex multi-step tasks.
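
In code, chaining can be as simple as feeding each response into the next prompt. This is a minimal sketch: `ask` is a hypothetical stand-in for whatever function sends a prompt to your LLM and returns its text reply, and the step wording is illustrative.

```python
# A minimal prompt-chaining sketch. `ask` is a hypothetical stand-in for
# whatever function sends a prompt to your LLM and returns its text reply.
def chain(steps, ask):
    """Run prompt templates in order, feeding each output into the next."""
    result = ""
    for template in steps:
        prompt = template.format(previous=result)
        result = ask(prompt)
    return result

# Example chain: outline -> per-section summaries -> full draft.
steps = [
    "List the 5 main sections a blog post on prompt writing should cover.",
    "Write a one-paragraph summary for each section in this outline:\n{previous}",
    "Expand these summaries into a full post, keeping the structure:\n{previous}",
]
```

Each intermediate output gets the model's full attention, which is exactly what a single mega-prompt sacrifices.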

The 5 Proven Prompt Patterns

These five patterns cover the vast majority of practical use cases. Each has been tested across thousands of real-world prompts and consistently produces better results than unstructured prompting.

Pattern 1: The Role + Task + Format (RTF) Pattern

The RTF pattern is the workhorse of effective prompting. It assigns a persona, specifies the task, and demands a particular format — all in one prompt.

Most versatile — works for any task

Format

Act as [ROLE]. [TASK]. Format as [OUTPUT FORMAT].

❌ Weak prompt
Write some code.
✅ RTF prompt
Act as a senior Python developer with expertise in FastAPI. Write a REST API endpoint for user registration with email validation, password hashing using bcrypt, and a PostgreSQL database insert. Return the full route handler code with type hints and error handling. Format as a single, production-ready Python file.

This pattern is particularly effective because the role instruction activates relevant knowledge clusters in the model's training data. "Senior Python developer" brings different assumptions, vocabulary, and code quality expectations than "beginner coder."
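
If you build prompts programmatically, the RTF pattern reduces to a small template function. This is a sketch — the function name and wording are illustrative, not a standard API.

```python
def rtf_prompt(role: str, task: str, output_format: str) -> str:
    """Assemble a Role + Task + Format prompt from its three parts."""
    return f"Act as {role}. {task} Format as {output_format}."

prompt = rtf_prompt(
    role="a senior Python developer with expertise in FastAPI",
    task="Write a REST API endpoint for user registration with email validation.",
    output_format="a single, production-ready Python file",
)
```

Templating the pattern this way keeps the structure consistent across a team or an automated pipeline.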

Pattern 2: The Few-Shot Example Pattern

Instead of describing what you want in abstract terms, show examples of the exact input-output behavior you expect. This is called few-shot learning — providing 2–5 examples that demonstrate the pattern.

Best for formatting, classification, extraction

Format

Here are examples of [TASK]: [EXAMPLES]. Now do the same for: [NEW INPUT].

❌ Weak prompt
Categorize this customer feedback as positive, negative, or neutral.
✅ Few-shot prompt
Categorize the following customer feedback as POSITIVE, NEGATIVE, or NEUTRAL. Here are examples:
Input: "Best purchase I've made in years. Works exactly as described." → POSITIVE
Input: "Shipping was fine, product arrived." → NEUTRAL
Input: "Totally floored by how fast this is. Worth every penny." → POSITIVE
Now categorize this:
Input: "Took 3 weeks to arrive and the package was damaged. Very disappointed."

Few-shot prompting is dramatically more effective than zero-shot for tasks where the output style is specific or non-obvious. It removes ambiguity about what "positive" or "negative" means in your specific context.
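
When the examples live in data rather than in your head, assembling the few-shot prompt is a short loop. A sketch (the helper name and wording are illustrative):

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt: instruction, labeled examples, then the new input."""
    lines = [instruction, "Here are examples:"]
    for text, label in examples:
        lines.append(f'Input: "{text}" -> {label}')
    lines.append("Now categorize this:")
    lines.append(f'Input: "{new_input}"')
    return "\n".join(lines)

examples = [
    ("Best purchase I've made in years.", "POSITIVE"),
    ("Shipping was fine, product arrived.", "NEUTRAL"),
    ("Took 3 weeks to arrive and the package was damaged.", "NEGATIVE"),
]
prompt = few_shot_prompt(
    "Categorize the following customer feedback as POSITIVE, NEGATIVE, or NEUTRAL.",
    examples,
    "Works exactly as described. Very happy.",
)
```

Keeping examples in a list makes it easy to swap them per domain without rewriting the prompt.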

Pattern 3: The Chain-of-Thought (CoT) Pattern

Ask the model to reason step by step. This pattern is surprisingly powerful — it often improves accuracy substantially on complex reasoning tasks. You can use it by simply appending "think through this step by step" or "let's think about this carefully."

Best for analysis, math, complex decisions

Format

[COMPLEX TASK]. Think through this step by step, showing your reasoning for each step.

Example — complex decision
I'm deciding between three candidates for a senior marketing role:

Candidate A: 10 years experience, strong portfolio, but salary ask is 20% above budget.
Candidate B: 5 years experience, excellent references, salary ask is at budget.
Candidate C: 3 years experience, fastest growth trajectory, salary ask is 10% below budget.

Think through this step by step, weighing the tradeoffs of experience vs. budget vs. growth potential. Recommend one candidate with clear reasoning.

The chain-of-thought pattern works because it forces the model to externalize intermediate reasoning steps — catching errors in logic before they compound into a wrong final answer. For math, code debugging, and strategic decisions, this is the single highest-impact addition you can make to any prompt.

Pattern 4: The Constrained Output Pattern

When you need machine-readable or highly structured output, constrain the format explicitly. This pattern is essential when the output feeds into downstream automation, a spreadsheet, or a database.

Best for automation, data extraction, structured data

Format

[TASK]. Return output as [CONSTRAINED FORMAT, e.g., JSON, CSV, table]. No additional text.

Example — structured JSON output
Extract the key information from the following job posting and return it as a JSON object with these exact fields: job_title (string), company (string), location (string), remote (boolean), salary_min (number or null), salary_max (number or null), required_skills (array of strings), apply_url (string). No additional text — JSON only. Job posting: "We are hiring a Senior Frontend Engineer to join our distributed team. This role is fully remote worldwide, salary $140,000–$170,000 annually. You need 5+ years with React and TypeScript. Apply at careers.ourcompany.com/fe-senior"

The constraint "No additional text — JSON only" is critical. Without it, most models will wrap the JSON in markdown code fences or add explanatory text that breaks parsers.
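
Even with that constraint in place, defensive parsing on the consuming side is cheap insurance. A minimal sketch in Python that tolerates stray code fences:

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Parse JSON from a model reply, tolerating markdown code fences.

    Models sometimes wrap output in ```json ... ``` fences despite
    'JSON only' instructions, so strip them before parsing.
    """
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    text = match.group(1) if match else raw.strip()
    return json.loads(text)
```

In a pipeline, `json.loads` failures are also your signal to re-prompt with a stronger format constraint.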

Pattern 5: The Self-Critique / Iterative Improvement Pattern

Ask the AI to generate a response, then critique it against your criteria, then revise. This two-step pattern produces significantly higher-quality output than asking for the best version in one pass.

Best for high-stakes content: writing, strategy, code

Format

[TASK]. Then review your response against these criteria: [CRITERIA]. Revise anything that doesn't meet them.

Example — content with self-critique
Write a 300-word product launch email for our new project management tool, targeting small business owners who have never used dedicated PM software before. The tone should be encouraging but not condescending. Then review your draft against these criteria: (1) Is the value proposition clear within the first 50 words? (2) Does it avoid jargon that a non-technical reader wouldn't know? (3) Is there a clear, single call-to-action? Revise any section that fails these checks.
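
The same two-pass structure works as a loop in code. A sketch: `ask` is a hypothetical stand-in for your model call, and the critique wording is illustrative.

```python
# A two-pass draft-then-critique loop. `ask` is a hypothetical stand-in
# for your model call; the prompt wording is illustrative.
def draft_and_revise(task: str, criteria: list[str], ask) -> str:
    draft = ask(task)
    numbered = "\n".join(f"({i}) {c}" for i, c in enumerate(criteria, 1))
    critique_prompt = (
        "Review the draft below against these criteria:\n"
        f"{numbered}\n"
        "Revise any section that fails them. Return only the revised draft.\n\n"
        f"Draft:\n{draft}"
    )
    return ask(critique_prompt)
```

Splitting generation and critique into separate calls also lets you log the draft, which is useful for debugging why a revision went wrong.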

15 Real AI Prompt Examples You Can Copy Today

Patterns are useful, but examples are faster. Here are 15 working prompts organized by use case — copy them directly or adapt them to your situation.

Code & Development

1. Code Review
Act as a senior software engineer. Review the following code for potential bugs, performance issues, security vulnerabilities, and violations of clean code principles. For each issue found, provide the file, line number (if identifiable), severity (critical/high/medium/low), and a specific fix. Code: [paste code here]
2. Debug Assistant
I'm getting this error: [paste error message]. The relevant code is in [file/module]. Walk me through what this error means, what likely causes it, and how to fix it — in plain language. Then show me the corrected code.
3. SQL Generation
Given the following database schema: [describe tables and columns]. Write a SQL query that [describe what you want]. Explain the query logic step by step and flag any performance considerations.

Writing & Content

4. Blog Post Draft
Write a 1,200-word blog post on [topic]. Target audience: [description]. Include a compelling hook in the first 50 words, 3–4 substantive sections, and a call-to-action at the end. Tone: [formal/conversational/technical]. Optimize for SEO using the keyword: [keyword].
5. Email — Cold Outreach
Write a cold outreach email to [prospect description] at [company]. The goal is [specific goal: book a call, get a referral, etc.]. The hook is [their pain point or your unique angle]. Tone: professional, direct, not pushy. Max 150 words. Include a subject line and a single clear CTA.
6. Content Repurposing
Take the following article/podcast transcript and create: (1) A Twitter/X thread of 5–7 punchy tweets with relevant hashtags. (2) A LinkedIn post formatted as an insightful long-form update. (3) A one-paragraph email summary for my newsletter. (4) Two potential hooks for a YouTube video or podcast episode. Source: [paste content here]
7. Technical Documentation
Write technical documentation for [feature/API/function]. Audience: [developers/non-technical users/etc.]. Include: overview, prerequisites, step-by-step setup instructions with code examples, common pitfalls and how to avoid them, and a FAQ section with 4–5 questions a real user would ask. Format in markdown.

Analysis & Decision-Making

8. Competitive Analysis
Conduct a competitive analysis for [your product/service]. Compare it against [competitor A], [competitor B], and [competitor C] across these dimensions: features, pricing, target audience, strengths, weaknesses, and market positioning. Format as a comparison table, then summarize the key strategic implications for [your company].
9. Pros and Cons Decision Framework
Help me decide whether to [major decision: e.g., switch jobs, buy new software, change vendors]. Give me a structured pros/cons analysis with at least 5 specific pros and 5 specific cons. For each point, note the weight (how significant is this factor?) and the evidence or assumption it relies on. Then give a final recommendation with your reasoning.
10. Meeting Agenda Builder
Create a meeting agenda for a [type] meeting with [N] attendees. The meeting goal is [goal]. Attendees are: [role/title descriptions]. The meeting is [duration]. Include: pre-work for attendees, agenda items with time allocations, the decision that needs to come out of each section, and suggested facilitation prompts for the hardest conversations.

Learning & Teaching

11. Concept Explainer
Explain [concept] to me as if I'm a [audience: e.g., 12-year-old, non-technical manager, new developer]. Start with the analogy/metaphor that best captures it, then build to the full definition. Use simple language. Include one real-world example of how [concept] is used.
12. Study Guide Generator
Create a study guide for [subject/topic]. Format as: (1) A concept map showing how the main ideas connect. (2) A list of key terms with definitions. (3) 10 practice questions (mix of recall, application, and analysis). (4) The 3 most important things to remember. (5) Common misconceptions about this topic.

Data & Research

13. Data Summary
Analyze the following dataset and summarize the key findings in plain language. Identify the 3 most important trends, the most surprising data point, and the most significant outlier. Also flag any data quality issues. Then suggest 3 follow-up questions this data raises that you would investigate next. Data: [paste data here]
14. Summarize Long Documents
Summarize the following document in exactly 300 words. Your summary must: (1) State the main argument in the first 2 sentences. (2) Cover the 3 most important supporting points. (3) Include the most important conclusion or recommendation. (4) Note any specific data or claims that seem important enough to quote directly. Omit background information and tangents. Document: [paste text here]
15. Research Brief
Create a research brief on [topic]. For each subtopic, provide: a 2-sentence overview, the key findings or consensus, any major disagreements or open questions, and 2–3 recommended sources for deeper reading. Cover: [subtopics]. Format as a structured document with section headers.

Model-Specific Tips: ChatGPT, Claude, and Gemini

While the core prompting principles work across all major LLMs, each model has quirks and strengths that reward tailored approaches.

ChatGPT (GPT-4)

Strengths: Wide general knowledge, strong code generation, excellent with creative and analytical tasks.

Claude (Anthropic)

Strengths: Exceptional at long-form writing and analysis, strong reasoning, safer by default, large context window.

Google Gemini

Strengths: Native Google search integration, multimodal (text, images, code), strong for research tasks.

Common AI Prompt Mistakes (and How to Fix Them)

These five mistakes account for the majority of poor AI outputs. They're all fixable with better prompting technique.

Mistake 1: Too vague.
Why it fails: The model generates something "correct" that doesn't solve your actual problem.
Fix: Be specific about format, length, audience, and tone.

Mistake 2: No role assignment.
Why it fails: The model defaults to "helpful generalist" — rarely what you need.
Fix: Start with "Act as a [specific role]."

Mistake 3: Multi-part questions without structure.
Why it fails: Early parts of the answer set the tone; later parts get shallow treatment.
Fix: Use numbered lists for multi-part questions, or chain into separate prompts.

Mistake 4: Assuming context the model doesn't have.
Why it fails: The model cannot read your mind or your screen.
Fix: Include all relevant background: your industry, your product, your audience.

Mistake 5: Prompting once and accepting the output.
Why it fails: First outputs are often first-draft quality; iteration is normal.
Fix: Read the output, identify what's missing, and refine in a follow-up prompt.

How to Use the SnapUtils AI Prompt Builder

The SnapUtils AI Prompt Builder turns these patterns into a guided, interactive tool. Instead of building prompts from scratch each time, you work through seven structured sections:

  1. Role / Persona — Define who the AI should be
  2. Context — Provide background the AI needs
  3. Task — What specifically you want done
  4. Output Format — How you want the response structured
  5. Constraints — Length limits, style requirements, rules
  6. Tone / Style — Formal, casual, technical, encouraging, etc.
  7. Examples — Show the AI what good output looks like

The tool provides a real-time Quality Score (0–10) that rates your prompt across five dimensions — Clarity, Specificity, Context, Format, and Constraints — so you know whether your prompt is actually ready before you use it.

It also includes 15 starter templates covering common use cases (Code Review, Blog Post, Data Analysis, Meeting Prep, Email, and more) and 6 enhancement patterns that you can apply to any prompt with one click.

Try the AI Prompt Builder

Build a structured, high-quality prompt in under 3 minutes. Free, no login, works entirely in your browser.

Start Building Prompts →

Frequently Asked Questions

Does prompt engineering still matter now that AI models are better?

Yes — more than ever. Better models are more sensitive to how you ask. The ceiling for what you can accomplish with a well-crafted prompt rises faster than the floor for poorly crafted ones. As models become more capable, the difference between a good prompt and a mediocre one becomes more impactful, not less. The models don't read your mind — they respond to text.

What's the most important thing to include in an AI prompt?

Context and format are the two highest-impact additions. "Write an email" produces generic output. "Write a 150-word cold outreach email to engineering managers at Series B startups, professional tone, single CTA to book a call" produces something specific and usable. The more context you give about your specific situation, the less editing you'll need to do afterward.

How do I get AI to stop giving generic responses?

Generic responses come from generic prompts. Add specificity about your audience, your industry, your product, and your goal. Add examples of what "good" looks like in your context. Add constraints: "in 100 words," "for non-technical readers," "optimized for LinkedIn." The model will match the specificity level of your prompt.

Should I use system prompts or user prompts?

Use both strategically. System prompts set persistent context — your role, communication style preferences, default format. User prompts are for the task at hand. In the ChatGPT app, the Custom Instructions settings act as a persistent system prompt; when calling the ChatGPT or Claude APIs, you set the system prompt directly. Set your preferences once, then focus user prompts on the specific task.
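
Most chat APIs express this split as a list of role-tagged messages. A sketch using the common role/content shape — exact field names and the editor persona below are illustrative, and details vary by vendor:

```python
# The common chat-API message shape: the system message holds persistent
# preferences, the user message holds the task. Field names vary by vendor.
messages = [
    {
        "role": "system",
        "content": (
            "You are a senior technical editor. Default to concise, "
            "plain-language answers under 300 words."
        ),
    },
    {
        "role": "user",
        "content": "Review this product description for clarity: ...",
    },
]
```

The system message persists across turns, so task-specific detail belongs in the user message, not there.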

What are tokens and why do they matter for prompting?

Tokens are the units LLMs process text in — roughly 4 characters or 0.75 words. Each model has a maximum context window (measured in tokens). Everything in your prompt — including the response it generates — counts against this limit. Longer prompts consume your available context faster. This is why being concise while being specific matters: you're trading off context space.
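
The 4-characters-per-token rule of thumb translates directly into a quick budgeting helper. This is a rough sketch only; real counts depend on the model's tokenizer.

```python
def rough_token_count(text: str) -> int:
    """Rough token estimate via the ~4-characters-per-token rule of thumb.

    Real counts depend on the model's tokenizer; use the vendor's own
    tokenizer when context limits or billing actually matter.
    """
    return max(1, len(text) // 4)
```

A quick estimate like this is enough to notice when a pasted document will crowd out room for the response.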

How do I use AI effectively for code-related tasks?

For code tasks, three additions make a significant difference: (1) Specify the language and framework explicitly. (2) Paste the relevant code and the error or goal in the same prompt. (3) Ask the AI to explain its reasoning before showing the code — this surfaces any misunderstanding early. For code reviews, use the RTF pattern: "Act as a senior [language] engineer with expertise in [framework]. Review this code for [specific issue types]."

Is it okay to ask AI to output in JSON or specific formats?

Yes — and it's one of the most useful things you can do. When you need structured data for automation, say so explicitly: "Return output as a JSON object with fields X, Y, Z. No markdown, no explanation, JSON only." Adding "JSON only" and "no markdown" is important — without explicit constraints, most models will wrap JSON in code fences or add explanatory text.
