
Prompt Engineering: Better Results From AI

Practical techniques for writing effective prompts that produce reliable AI outputs. Works across ChatGPT, Claude, Gemini, and other LLMs.

S5 Labs Team · February 3, 2026

The difference between a mediocre AI interaction and a great one often comes down to how you ask. The same model that gives you a vague, unhelpful response to a poor prompt can produce exactly what you need when asked well. This isn’t magic—it’s prompt engineering, and it’s a skill anyone can learn.

Whether you’re using AI for work, building AI-powered products, or just trying to get better answers from ChatGPT, these practices will help you communicate more effectively with large language models.

Why Prompts Matter So Much

Large language models are trained to predict what text should come next. When you write a prompt, you’re setting up a context that shapes what the model considers “next.” A vague prompt creates a vague context, and the model fills in the gaps with generic, average responses. A specific prompt creates a specific context, guiding the model toward exactly what you need.

Think of it like giving directions. “Go that way” might eventually get someone to the destination, but “Take the second right, then continue straight for three blocks until you see the blue building” gets them there faster and more reliably. Prompts work the same way.

The good news: you don’t need to understand how neural networks work to write great prompts. You just need to understand what makes instructions clear and complete.

The Foundation: Be Specific

The single most impactful thing you can do is be specific. Vague prompts produce vague results. Every detail you add helps the model understand what you actually want.

Instead of: “Write about marketing.”

Try: “Write a 500-word blog post explaining the basics of content marketing for small business owners who are new to digital marketing. Focus on practical, low-cost tactics they can implement this week.”

The second prompt specifies:

  • The format (blog post)
  • The length (500 words)
  • The topic (content marketing basics)
  • The audience (small business owners, new to digital marketing)
  • The focus (practical, low-cost, immediately actionable)

Each specification narrows the range of possible responses, making it far more likely you’ll get something useful.

Questions to Ask Yourself

Before sending a prompt, run through this checklist:

  • What format do I want? (Email, bullet points, essay, code, table)
  • How long should it be? (One paragraph, 500 words, comprehensive)
  • Who is the audience? (Experts, beginners, executives, customers)
  • What tone is appropriate? (Formal, casual, technical, friendly)
  • What should be included? (Specific topics, examples, data)
  • What should be avoided? (Jargon, certain topics, assumptions)

You don’t need to specify all of these every time, but considering them helps you write better prompts.
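For repeated tasks, the checklist can be baked into a small helper so you never forget a dimension. This is a minimal sketch with illustrative names (`build_prompt` is not from any library), showing one way to assemble checklist answers into a single specific prompt:

```python
# Sketch: turn the checklist answers into one specific prompt string.
# Function and parameter names are illustrative, not from any library.

def build_prompt(task, fmt=None, length=None, audience=None,
                 tone=None, include=None, avoid=None):
    """Combine optional checklist answers into a single prompt."""
    parts = [task]
    if fmt:
        parts.append(f"Format: {fmt}.")
    if length:
        parts.append(f"Length: {length}.")
    if audience:
        parts.append(f"Audience: {audience}.")
    if tone:
        parts.append(f"Tone: {tone}.")
    if include:
        parts.append("Be sure to cover: " + ", ".join(include) + ".")
    if avoid:
        parts.append("Avoid: " + ", ".join(avoid) + ".")
    return " ".join(parts)

prompt = build_prompt(
    "Write a blog post explaining the basics of content marketing.",
    fmt="blog post",
    length="500 words",
    audience="small business owners new to digital marketing",
    include=["practical, low-cost tactics"],
    avoid=["jargon"],
)
```

Each keyword argument you fill in narrows the response space, exactly as the checklist intends; leaving one out just omits that constraint.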

Give Context and Background

AI models don’t know your situation unless you tell them. Context that seems obvious to you—your industry, your goals, your constraints—is invisible to the model. Providing relevant background dramatically improves results.

Weak prompt: “Help me respond to this customer complaint.”

Better prompt: “I run a small e-commerce business selling handmade jewelry. A customer emailed complaining that their order arrived a week late. We use a third-party shipping service, and the delay was caused by weather issues outside our control. Help me write a response that acknowledges their frustration, explains the situation without making excuses, and offers a goodwill gesture. Our typical approach is offering 15% off the next order.”

The context helps the model understand:

  • Your business type and size
  • What happened
  • What caused it
  • Your constraints
  • Your typical policies
  • What kind of response you’re looking for

Without this context, you’d get a generic customer service template. With it, you get a tailored response that fits your situation.

Define the Role

One effective technique is telling the model what role to adopt. This frames the entire response through a specific lens.

“You are an experienced tax accountant. Review this expense list and flag any items that might raise red flags during an audit, explaining why.”

“You are a patient teacher explaining quantum physics to a curious 12-year-old. Explain wave-particle duality using everyday analogies.”

“You are a skeptical editor reviewing this marketing copy. Point out any claims that need evidence, phrases that sound like hype, and promises we might not be able to keep.”

The role shapes not just what the model says, but how it thinks about the problem. A skeptical editor will approach copy differently than an enthusiastic marketer.

This technique is especially powerful when you need a perspective different from the default helpful assistant persona. Want criticism? Ask the model to be a critic. Want technical depth? Ask it to be a specialist. Want simplicity? Ask it to be a teacher for beginners.

Show, Don’t Just Tell

Examples are worth a thousand words of instruction. When you show the model what you want, it can pattern-match in ways that go beyond explicit instructions.

Without examples: “Categorize these customer reviews as positive, negative, or neutral.”

With examples: “Categorize these customer reviews as positive, negative, or neutral.

Examples:

  • ‘Love this product! Works perfectly.’ → Positive
  • ‘Arrived broken, complete waste of money.’ → Negative
  • ‘It’s fine, does what it says.’ → Neutral
  • ‘Great quality but shipping was slow.’ → Positive (product review, not shipping review)

Now categorize these reviews: [your reviews here]”

The examples do more than explain the categories—they show edge cases and your specific criteria. That last example clarifies that you’re categorizing based on product sentiment, not shipping experience. That nuance would be hard to convey in instructions alone.

This technique, called few-shot prompting, works especially well for:

  • Classification tasks
  • Formatting requirements
  • Tone and style matching
  • Complex or nuanced judgments

Two or three good examples often outperform paragraphs of explanation.
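If you run the same classification repeatedly, it helps to keep the labeled examples in one place and generate the few-shot prompt programmatically. A minimal sketch, with illustrative names (`few_shot_prompt`, `EXAMPLES` are not from any library):

```python
# Sketch: build a few-shot classification prompt from labeled examples.
# Names are illustrative; the structure mirrors the prompt shown above.

EXAMPLES = [
    ("Love this product! Works perfectly.", "Positive"),
    ("Arrived broken, complete waste of money.", "Negative"),
    ("It's fine, does what it says.", "Neutral"),
]

def few_shot_prompt(instruction, examples, new_items):
    """Render instruction + labeled examples + unlabeled items as one prompt."""
    lines = [instruction, "", "Examples:"]
    for text, label in examples:
        lines.append(f"- '{text}' -> {label}")
    lines.append("")
    lines.append("Now categorize these reviews:")
    for item in new_items:
        lines.append(f"- '{item}'")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Categorize these customer reviews as positive, negative, or neutral.",
    EXAMPLES,
    ["Shipped fast, love the color."],
)
```

Keeping the examples in a list makes it easy to add edge cases as you discover them, which is usually where few-shot prompts earn their keep.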

Structure Complex Requests

For multi-part tasks, structure helps both you and the model keep track of what needs to happen. Break complex requests into clear steps or sections.

Unstructured: “Analyze this business proposal and tell me what you think, including the financials and market opportunity and risks and whether we should do it.”

Structured: “Analyze this business proposal using the following framework:

  1. Executive Summary: In 2-3 sentences, what is this proposal about?

  2. Financial Analysis: Review the projected costs, revenue, and ROI. Are the assumptions reasonable? What’s missing?

  3. Market Opportunity: How well does the proposal understand the target market? Is the opportunity sizing credible?

  4. Key Risks: What are the top 3 risks that could cause this to fail?

  5. Recommendation: Based on the above, should we pursue this? What conditions would need to be true?

Here’s the proposal: [proposal text]”

The structure ensures comprehensive coverage, makes the output easier to read, and helps you verify that each aspect was addressed.
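A framework like the one above is also easy to templatize so every proposal gets the same treatment. This sketch assumes nothing beyond the structure shown in the example; all names (`framework_prompt`, `SECTIONS`) are hypothetical:

```python
# Sketch: render a numbered analysis framework into a reusable prompt.
# Section titles follow the example above; names are illustrative.

SECTIONS = [
    ("Executive Summary", "In 2-3 sentences, what is this proposal about?"),
    ("Financial Analysis", "Are the cost, revenue, and ROI assumptions reasonable?"),
    ("Key Risks", "What are the top 3 risks that could cause this to fail?"),
    ("Recommendation", "Should we pursue this, and under what conditions?"),
]

def framework_prompt(intro, sections, body):
    """Number each (title, question) pair and append the material to analyze."""
    lines = [intro, ""]
    for i, (title, question) in enumerate(sections, start=1):
        lines.append(f"{i}. {title}: {question}")
    lines += ["", "Here's the proposal:", body]
    return "\n".join(lines)

out = framework_prompt(
    "Analyze this business proposal using the following framework:",
    SECTIONS,
    "[proposal text]",
)
```

Because the sections are data rather than hand-typed prose, you can add, reorder, or drop criteria without rewriting the prompt.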

Be Explicit About Format

When you need output in a specific format—whether for presentation, processing, or integration with other tools—say so explicitly.

“Return your analysis as a markdown table with columns for Feature, Benefit, and Priority (High/Medium/Low).”

“Provide your response as a numbered list, with each item being one sentence maximum.”

“Return the extracted data as JSON with the following structure: {name: string, email: string, company: string}”

Don’t assume the model will guess your format preferences. Being explicit prevents frustrating reformatting work later.

This becomes critical when you’re using AI outputs programmatically. For production systems, see our technical guide on prompt engineering patterns for advanced techniques around structured outputs and validation.
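When the output feeds into code, it is worth validating the model's reply before trusting it, since models occasionally return malformed or incomplete JSON. A minimal sketch using the standard library, with the field names taken from the example structure above (the `parse_contact` helper is hypothetical):

```python
import json

# Sketch: validate model output against the expected structure before use.
# Field names follow the example above; adjust to your own schema.

EXPECTED_FIELDS = {"name": str, "email": str, "company": str}

def parse_contact(raw):
    """Parse the model's JSON reply, rejecting missing or mistyped fields."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or invalid field: {field}")
    return data

record = parse_contact(
    '{"name": "Ada", "email": "ada@example.com", "company": "Example Co"}'
)
```

Catching the `ValueError` at the call site lets you retry the request or fall back gracefully instead of crashing on a bad response.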

Iterate and Refine

Your first prompt rarely produces the perfect result—and that’s fine. Treat prompting as a conversation, not a one-shot interaction.

Initial prompt: “Write a product description for our new running shoes.”

Response: [Generic, enthusiastic copy that could describe any running shoe]

Follow-up: “Make it more specific to trail running, emphasize the grip technology, and make the tone more understated—we want to sound confident but not salesy.”

Response: [Better, but still too long]

Follow-up: “Good direction. Condense this to 75 words maximum while keeping the key points about grip and durability.”

Each iteration refines the output. Pay attention to what’s working and what isn’t. If the model consistently misses something, that’s a signal your original prompt needs that element added explicitly.

Some people try to write the perfect prompt upfront. That’s usually slower than starting with something reasonable and iterating. The conversation is part of the process.

Handle Uncertainty Gracefully

AI models can sound confident while being completely wrong. This is worse than being obviously wrong because it’s harder to catch. Build uncertainty handling into your prompts.

“If you’re not confident about any part of this answer, explicitly say so and explain what additional information would help.”

“For each recommendation, rate your confidence as High, Medium, or Low and briefly explain why.”

“If this question is outside your knowledge area or if the answer depends heavily on information you don’t have, tell me rather than guessing.”

These instructions don’t guarantee honest uncertainty expression—models still hallucinate—but they help. You’re giving the model permission to say “I don’t know,” which it otherwise tends to avoid.

For important decisions, verify AI outputs against reliable sources. Don’t trust confident-sounding answers on factual matters without checking.

Avoid Common Pitfalls

A few patterns consistently lead to poor results:

Being Too Open-Ended

“Tell me about project management” could generate anything from a Wikipedia summary to a book chapter. If you don’t know what you want, the model will give you something generic.

Fix: Ask for something specific. “What are the three biggest mistakes new project managers make, and how can they avoid them?”

Burying the Key Request

Putting your main ask at the end of a long prompt means it might get less attention than the preamble.

Fix: Front-load the request. “I need help writing a rejection email. Context: I interviewed someone who was qualified but not the best fit…”

Assuming Shared Knowledge

References to “the project,” “our system,” or “what we discussed” mean nothing to the model.

Fix: Include necessary context, or explicitly reference earlier conversation turns if using a chat interface.

Asking Multiple Unrelated Questions

“What’s the best programming language, and also how should I structure my resume, and can you explain blockchain?” puts the model in a difficult position.

Fix: One topic per prompt, or explicitly structure multiple questions with clear separation.

Accepting the First Response

Many people treat the first output as final, either settling for it or manually fixing everything themselves instead of asking the model to revise.

Fix: Tell the model specifically what to change. “Good start, but make the tone more formal and add specific numbers where possible.”

Practical Applications

These principles apply whether you’re using AI for quick tasks or complex projects:

For writing assistance: Specify the audience, tone, length, and purpose. Provide examples of writing you like. Iterate on drafts rather than expecting perfection immediately.

For research and analysis: Define what you’re looking for, what format you need, and what your constraints are. Ask for sources when factual accuracy matters. Follow up on interesting points.

For coding help: Describe the context (language, framework, what you’re trying to achieve), provide relevant code snippets, and specify constraints. Ask for explanations alongside code.

For brainstorming: Set the scope and constraints, ask for multiple options, and request that the model explore different directions. Use follow-ups to dig deeper into promising ideas.

For business tasks: Provide the business context, stakeholder considerations, and success criteria. Be explicit about format requirements for documents or presentations.

Building the Skill

Prompt engineering improves with practice. Here’s how to accelerate your learning:

Save what works. When a prompt produces great results, save it. Build a personal library of effective prompts for tasks you do regularly.

Analyze failures. When results disappoint, ask why. Was the prompt too vague? Missing context? Poorly structured? Each failure teaches you something.

Experiment with variations. Try different approaches to the same task. You’ll discover that small changes can produce significantly different results.

Stay current. Models improve over time, and new techniques emerge. What works today may be superseded by better approaches. Follow AI developments, but don’t chase every trend—fundamentals matter most.

The Bigger Picture

Good prompt engineering is ultimately about clear communication. The skills that make you effective at instructing AI—being specific, providing context, structuring complex requests, iterating based on feedback—are the same skills that make you effective at communicating with humans.

As AI becomes more integrated into work and daily life, the ability to get good results from these tools becomes increasingly valuable. It’s not about tricks or hacks—it’s about learning to express what you want clearly and completely.

Start with these fundamentals, practice regularly, and you’ll find AI becomes a far more useful tool than it seemed at first.


Ready to go deeper? Our technical guide on prompt engineering patterns covers advanced techniques for production systems, including structured output enforcement, chain-of-thought prompting, and defensive prompt design.

Want to discuss this topic?

We'd love to hear about your specific challenges and how we might help.