How to Write AI Prompts: The Complete Guide
Most people use AI wrong — not because they lack technical skill, but because they don't know how to ask. The difference between a vague prompt and a precise one can be the difference between a useless wall of text and exactly what you needed.
This guide covers everything you need to write effective AI prompts: the core principles that work across every model, five proven prompt patterns with real examples, model-specific tips for ChatGPT, Claude, and Gemini, and the mistakes most people make. By the end, you'll know why your prompts underperform and how to fix them.
Build Better Prompts in Minutes
Use the SnapUtils AI Prompt Builder to structure prompts with real-time quality scoring, 15 starter templates, and 6 enhancement patterns. 100% free, no login required.
Open AI Prompt Builder →

What Is an AI Prompt?
An AI prompt is the input you give to a large language model (LLM) to request a response. A prompt can be a single sentence, a multi-paragraph instruction, or a structured conversation with multiple exchanges. The term "prompt engineering" refers to the practice of writing inputs that reliably produce the desired output — not just any output, but one that actually solves your problem.
The key insight: LLMs don't guess what you want — they predict what text should come next based on everything you've told them. Your prompt is that context. The better your context, the more accurate the prediction.
The 6 Core Principles of Effective AI Prompts
Before diving into patterns and examples, internalize these six principles. Every great prompt follows them:
1. Be Specific, Not Vague
"Write something" produces nothing useful. "Write a 150-word product description for a B2B SaaS tool targeting engineering managers, focusing on time savings, with a confident tone" produces something you can work with. The more specific you are about format, length, audience, and tone, the less editing you'll need to do afterward.
2. Give the AI a Role or Persona
Telling the AI who to be dramatically changes its output. "Act as a senior software engineer with 15 years of experience reviewing pull requests" produces different advice than "act as a junior developer." Role-setting focuses the model's knowledge and calibrates its vocabulary and assumptions.
3. Define the Output Format Upfront
Tell the AI how you want the response structured before it generates it. "Format as a table with columns: Name, Strengths, Weaknesses, Recommended Use Case" is far more useful than "give me a comparison." The model will format it correctly on the first pass rather than forcing you to re-prompt.
4. Provide Context — Don't Assume Prior Knowledge
LLMs have no memory of previous conversations (unless you include the history). Every new conversation starts blank. If your request depends on context — your industry, your product, your audience — include it in the prompt. The AI can't read your mind, and it won't ask follow-up questions unless you train it to.
5. Use Constraints to Focus the Output
Constraints are more powerful than instructions. "Write 200 words" is more useful than "write a short description." "Use simple language for non-technical readers" is more actionable than "don't be too technical." Constraints cut off irrelevant directions and force relevance.
6. Chain Complex Tasks into Steps
A single prompt attempting to cover a complex multi-step task produces a shallow response. Breaking it into sequential prompts — where each step's output feeds into the next — produces far better results. This is the chain-of-thought approach, and it's one of the most reliable patterns for complex reasoning tasks.
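As a rough sketch, prompt chaining can be expressed as a small pipeline in which each answer is substituted into the next prompt template. The `ask` function below is a placeholder for whatever model API you use, not a real client:

```python
def chain_prompts(ask, steps, initial_input):
    """Run prompt templates in sequence, feeding each answer into the next.

    `ask` is a placeholder for your model call (API client, CLI, etc.);
    each step is a template with a {previous} slot for the prior output.
    """
    result = initial_input
    for template in steps:
        result = ask(template.format(previous=result))
    return result

# Hypothetical two-step chain: summarize risks, then turn them into a checklist.
steps = [
    "Summarize the key risks in: {previous}",
    "Turn these risks into a mitigation checklist: {previous}",
]
```

The same structure works whether "ask" is a web UI (paste each step manually) or an API call in a script.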
The 5 Proven Prompt Patterns
These five patterns cover the vast majority of practical use cases. Each has been tested across thousands of real-world prompts and consistently produces better results than unstructured prompting.
Pattern 1: The Role + Task + Format (RTF) Pattern
The RTF pattern is the workhorse of effective prompting. It assigns a persona, specifies the task, and demands a particular format — all in one prompt.
Format
Act as [ROLE]. [TASK]. Format as [OUTPUT FORMAT].
Weak prompt:
Write some code.
Improved prompt:
Act as a senior Python developer with expertise in FastAPI. Write a REST API endpoint for user registration with email validation, password hashing using bcrypt, and a PostgreSQL database insert. Return the full route handler code with type hints and error handling. Format as a single, production-ready Python file.
This pattern is particularly effective because the role instruction activates relevant knowledge clusters in the model's training data. "Senior Python developer" brings different assumptions, vocabulary, and code quality expectations than "beginner coder."
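If you generate prompts in code, the RTF template reduces to a one-line helper. The function name and arguments here are illustrative, not part of any library:

```python
def rtf_prompt(role, task, output_format):
    """Fill the Role + Task + Format template described above."""
    return f"Act as {role}. {task} Format as {output_format}."

prompt = rtf_prompt(
    "a senior Python developer with expertise in FastAPI",
    "Write a REST API endpoint for user registration with email validation.",
    "a single, production-ready Python file",
)
```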
Pattern 2: The Few-Shot Example Pattern
Instead of describing what you want in abstract terms, show examples of the exact input-output behavior you expect. This is called few-shot learning — providing 2–5 examples that demonstrate the pattern.
Format
Here are examples of [TASK]: [EXAMPLES]. Now do the same for: [NEW INPUT].
Weak prompt:
Categorize this customer feedback as positive, negative, or neutral.
Improved prompt:
Categorize the following customer feedback as POSITIVE, NEGATIVE, or NEUTRAL. Here are examples:
Input: "Best purchase I've made in years. Works exactly as described." → POSITIVE
Input: "Shipping was fine, product arrived." → NEUTRAL
Input: "Totally floored by how fast this is. Worth every penny." → POSITIVE
Now categorize this:
Input: "Took 3 weeks to arrive and the package was damaged. Very disappointed."
Few-shot prompting is dramatically more effective than zero-shot for tasks where the output style is specific or non-obvious. It removes ambiguity about what "positive" or "negative" means in your specific context.
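Assembling few-shot prompts by hand gets tedious if you classify often. A small helper (names are illustrative) can build them from labeled examples:

```python
def few_shot_prompt(task, examples, new_input):
    """Assemble a few-shot prompt from labeled (text, label) example pairs."""
    lines = [task, "Here are examples:"]
    for text, label in examples:
        lines.append(f'Input: "{text}" -> {label}')
    lines.append("Now categorize this:")
    lines.append(f'Input: "{new_input}"')
    return "\n".join(lines)

examples = [
    ("Best purchase I've made in years. Works exactly as described.", "POSITIVE"),
    ("Shipping was fine, product arrived.", "NEUTRAL"),
    ("Totally floored by how fast this is. Worth every penny.", "POSITIVE"),
]
prompt = few_shot_prompt(
    "Categorize the following customer feedback as POSITIVE, NEGATIVE, or NEUTRAL.",
    examples,
    "Took 3 weeks to arrive and the package was damaged. Very disappointed.",
)
```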
Pattern 3: The Chain-of-Thought (CoT) Pattern
Ask the model to reason step by step. This pattern is surprisingly powerful — it can substantially improve accuracy on complex reasoning tasks. You can use it by simply appending "think through this step by step" or "let's think about this carefully."
Format
[COMPLEX TASK]. Think through this step by step, showing your reasoning for each step.
Example prompt:
We're hiring for one open role and must choose among three finalists:
Candidate A: 10 years experience, strong portfolio, but salary ask is 20% above budget.
Candidate B: 5 years experience, excellent references, salary ask is at budget.
Candidate C: 3 years experience, fastest growth trajectory, salary ask is 10% below budget.
Think through this step by step, weighing the tradeoffs of experience vs. budget vs. growth potential. Recommend one candidate with clear reasoning.
The chain-of-thought pattern works because it forces the model to externalize intermediate reasoning steps — catching errors in logic before they compound into a wrong final answer. For math, code debugging, and strategic decisions, this is the single highest-impact addition you can make to any prompt.
Pattern 4: The Constrained Output Pattern
When you need machine-readable or highly structured output, constrain the format explicitly. This pattern is essential when the output feeds into downstream automation, a spreadsheet, or a database.
Format
[TASK]. Return output as [CONSTRAINED FORMAT, e.g., JSON, CSV, table]. No additional text.
The constraint "No additional text — JSON only" is critical. Without it, most models will wrap the JSON in markdown code fences or add explanatory text that breaks parsers.
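Even with that constraint, it is worth defending the parser on the receiving end. A minimal, stdlib-only sketch that tolerates an optional markdown fence before parsing:

```python
import json
import re

def parse_model_json(raw):
    """Parse model output as JSON, tolerating markdown code fences.

    Strips an optional ```json ... ``` wrapper before calling json.loads,
    since models sometimes add fences despite a "JSON only" instruction.
    """
    text = raw.strip()
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```

This handles both the well-behaved case (bare JSON) and the common failure mode (fenced JSON) without a re-prompt.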
Pattern 5: The Self-Critique / Iterative Improvement Pattern
Ask the AI to generate a response, then critique it against your criteria, then revise. This two-step pattern produces significantly higher-quality output than asking for the best version in one pass.
Format
[TASK]. Then review your response against these criteria: [CRITERIA]. Revise anything that doesn't meet them.
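Composed in code, the pattern is just the task plus an explicit criteria checklist appended to the prompt. A sketch, with illustrative names:

```python
def self_critique_prompt(task, criteria):
    """Append an explicit critique-and-revise step to a task prompt."""
    bullets = "\n".join(f"- {c}" for c in criteria)
    return (
        f"{task}\n\n"
        "Then review your response against these criteria:\n"
        f"{bullets}\n"
        "Revise anything that doesn't meet them."
    )

prompt = self_critique_prompt(
    "Write a 150-word product description for a developer tool.",
    ["under 150 words", "no buzzwords", "one concrete benefit per sentence"],
)
```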
15 Real AI Prompt Examples You Can Copy Today
Patterns are useful, but examples are faster. Here are 15 working prompts organized by use case — copy them directly or adapt them to your situation.
Code & Development
- "Act as a senior Python developer. Review this function for bugs, performance issues, and readability. List each issue with a suggested fix: [paste code]"
- "Explain what this error means, list the three most likely causes, then show the corrected code: [paste error and code]"
- "Write unit tests for this function covering edge cases and failure modes. Use pytest. [paste code]"
Writing & Content
- "Act as a content editor. Rewrite this paragraph to be 30% shorter without losing any key points: [paste text]"
- "Write a 150-word LinkedIn post about [topic] for [audience], confident tone, no hashtags, one clear takeaway."
- "Generate 10 headline options for an article about [topic]. Vary the angle: curiosity, benefit, contrarian, how-to."
Analysis & Decision-Making
- "Act as a strategy consultant. Compare these two options on cost, risk, and time-to-value. Format as a table, then recommend one with reasoning: [describe options]"
- "List the five strongest arguments against this plan, ranked by severity: [describe plan]"
- "Think through this decision step by step, stating your assumptions explicitly, then give a recommendation: [describe decision]"
Learning & Teaching
- "Explain [concept] at three levels: to a 10-year-old, to a college student, and to an expert."
- "Act as a patient tutor. Quiz me on [topic] one question at a time, correcting my answers before moving on."
- "Create a 4-week self-study plan for learning [skill], with weekly goals and one practice project per week."
Data & Research
- "Summarize the key findings of this text in five bullet points, each under 20 words: [paste text]"
- "Extract every date, name, and dollar amount from this text. Return as a table with columns: Value, Type, Context. [paste text]"
- "Act as a data analyst. Given this CSV sample, suggest five questions worth investigating and the analysis method for each: [paste sample]"
Model-Specific Tips: ChatGPT, Claude, and Gemini
While the core prompting principles work across all major LLMs, each model has quirks and strengths that reward tailored approaches.
ChatGPT (GPT-4)
Strengths: Wide general knowledge, strong code generation, excellent with creative and analytical tasks.
Key quirks:
- System prompt persistence: Instructions in the system prompt are persistent across messages in a session. Use this to set permanent context like "always explain in plain language" or "default to markdown formatting."
- Token limits: Context windows vary by GPT-4 version and tier (standard ChatGPT originally shipped with a ~8,000-token window; newer variants are much larger). Either way, include only the most relevant context — verbose prompts don't help.
- Web search: ChatGPT Plus can access real-time information through web browsing. For research tasks, enable browsing and be explicit: "Use web search to find recent data on..."
- Structured output: GPT-4 is better than older models at following format constraints (JSON, tables, etc.) in the same message. Still add "Return JSON only, no markdown" for safety.
Claude (Anthropic)
Strengths: Exceptional at long-form writing and analysis, strong reasoning, safer by default, large context window.
Key quirks:
- Massive context window: Claude can handle very long documents — use this to your advantage. Instead of summarizing first, paste the full source and ask specific questions about it.
- XML/JSON formatting preference: Claude responds particularly well to prompts structured with XML-like tags: <task>, <context>, <output>. This helps it disambiguate the different parts of complex prompts.
- Ethical alignment: Claude is more conservative by default about potentially harmful requests. If your legitimate request gets blocked, reframe the end goal rather than the harmful step — state what you're trying to accomplish, not the method.
- Thinking mode: Claude supports extended thinking (similar to chain-of-thought) for complex tasks. For multi-step analysis, say: "Think through this problem step by step before giving your final answer."
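As an illustration of the tag-based structure described above, a small helper (hypothetical, not part of any Anthropic library) that wraps prompt sections in XML-like tags:

```python
def tagged_prompt(task, context, output_spec):
    """Wrap prompt sections in XML-like tags, a structure Claude parses well."""
    return (
        f"<task>\n{task}\n</task>\n"
        f"<context>\n{context}\n</context>\n"
        f"<output>\n{output_spec}\n</output>"
    )

prompt = tagged_prompt(
    "Summarize the attached report for an executive audience.",
    "Q3 sales data for a B2B SaaS company.",
    "Five bullet points, each under 20 words.",
)
```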
Google Gemini
Strengths: Native Google search integration, multimodal (text, images, code), strong for research tasks.
Key quirks:
- Real-time search integration: Gemini Ultra can access live Google search results. For anything requiring current information, say: "Use Google search to find..." This is a genuine advantage over models without search access.
- Multimodal by default: Gemini handles images, code, and text in the same context. For tasks involving visual data or diagrams, describe the image alongside the text in the same message.
- More literal interpretation: Gemini tends to follow instructions more literally. For creative tasks, add explicit style or tone cues: "Write this with a dry, witty tone — not inspirational." Without it, output tends toward the generic.
- Context resets: Gemini's session context can be less persistent than ChatGPT's. For long projects, paste critical context at the start of each new message rather than relying on prior turns.
Common AI Prompt Mistakes (and How to Fix Them)
These five mistakes account for the majority of poor AI outputs. They're all fixable with better prompting technique.
| Mistake | Why It Fails | Fix |
|---|---|---|
| Too vague | The model generates something "correct" that doesn't solve your actual problem | Be specific about format, length, audience, and tone |
| No role assignment | The model defaults to "helpful generalist" — rarely what you need | Start with "Act as a [specific role]" |
| Multi-part questions without structure | Early parts of the answer set the tone, later parts get shallow treatment | Use numbered lists for multi-part questions, or chain into separate prompts |
| Assuming context the model doesn't have | The model cannot read your mind or your screen | Include all relevant background, your industry, your product, your audience |
| Prompting once and accepting the output | First outputs are often first-draft quality; iteration is normal | Read the output, identify what's missing, and refine in a follow-up prompt |
How to Use the SnapUtils AI Prompt Builder
The SnapUtils AI Prompt Builder turns these patterns into a guided, interactive tool. Instead of building prompts from scratch each time, you work through seven structured sections:
- Role / Persona — Define who the AI should be
- Context — Provide background the AI needs
- Task — What specifically you want done
- Output Format — How you want the response structured
- Constraints — Length limits, style requirements, rules
- Tone / Style — Formal, casual, technical, encouraging, etc.
- Examples — Show the AI what good output looks like
The tool provides a real-time Quality Score (0–10) that rates your prompt across five dimensions — Clarity, Specificity, Context, Format, and Constraints — so you know whether your prompt is actually ready before you use it.
It also includes 15 starter templates covering common use cases (Code Review, Blog Post, Data Analysis, Meeting Prep, Email, and more) and 6 enhancement patterns that you can apply to any prompt with one click.
Try the AI Prompt Builder
Build a structured, high-quality prompt in under 3 minutes. Free, no login, works entirely in your browser.
Start Building Prompts →

Frequently Asked Questions
Does prompt engineering still matter now that AI models are better?
Yes — more than ever. Better models are more sensitive to how you ask. The ceiling for what you can accomplish with a well-crafted prompt rises faster than the floor for poorly crafted ones. As models become more capable, the difference between a good prompt and a mediocre one becomes more impactful, not less. The models don't read your mind — they respond to text.
What's the most important thing to include in an AI prompt?
Context and format are the two highest-impact additions. "Write an email" produces generic output. "Write a 150-word cold outreach email to engineering managers at Series B startups, professional tone, single CTA to book a call" produces something specific and usable. The more context you give about your specific situation, the less editing you'll need to do afterward.
How do I get AI to stop giving generic responses?
Generic responses come from generic prompts. Add specificity about your audience, your industry, your product, and your goal. Add examples of what "good" looks like in your context. Add constraints: "in 100 words," "for non-technical readers," "optimized for LinkedIn." The model will match the specificity level of your prompt.
Should I use system prompts or user prompts?
Use both strategically. System prompts set persistent context — your role, communication style preferences, default format. User prompts are for the task at hand. In ChatGPT, the system prompt is the custom instructions field at the top. In Claude, it's the system prompt. Set your preferences once, then focus user prompts on the specific task.
What are tokens and why do they matter for prompting?
Tokens are the units LLMs process text in — roughly 4 characters or 0.75 words. Each model has a maximum context window (measured in tokens). Everything in your prompt — including the response it generates — counts against this limit. Longer prompts consume your available context faster. This is why being concise while being specific matters: you're trading off context space.
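Under the roughly 4 characters per token rule of thumb above, a quick budget check looks like this (a heuristic only; real tokenizers such as tiktoken give exact counts per model):

```python
def rough_token_estimate(text):
    """Estimate token count via the ~4 characters per token rule of thumb.

    Real tokenizers (e.g. tiktoken for OpenAI models) give exact counts;
    use this only for quick prompt budgeting.
    """
    return max(1, round(len(text) / 4))

# A 400-character prompt is roughly 100 tokens under this heuristic.
```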
How do I use AI effectively for code-related tasks?
For code tasks, three additions make a significant difference: (1) Specify the language and framework explicitly. (2) Paste the relevant code and the error or goal in the same prompt. (3) Ask the AI to explain its reasoning before showing the code — this surfaces any misunderstanding early. For code reviews, use the RTF pattern: "Act as a senior [language] engineer with expertise in [framework]. Review this code for [specific issue types]."
Is it okay to ask AI to output in JSON or specific formats?
Yes — and it's one of the most useful things you can do. When you need structured data for automation, say so explicitly: "Return output as a JSON object with fields X, Y, Z. No markdown, no explanation, JSON only." Adding "JSON only" and "no markdown" is important — without explicit constraints, most models will wrap JSON in code fences or add explanatory text.
Related Tools and Articles
- AI Prompt Builder — Build structured prompts with real-time quality scoring, 15 templates, and 6 enhancement patterns
- Word Counter — Track word and character counts while writing prompts and content
- Regex Tester — Test and debug regular expressions used in AI prompt parsing
- JSON Formatter — Validate and format JSON responses from AI APIs
- Case Converter — Convert text between naming conventions for variables and prompts
- JSON Formatter Guide — Format and validate JSON for API integrations