Zero-Shot vs Few-Shot Prompting

Understanding zero-shot and few-shot prompting strategies for code generation — when to use each, how to craft effective examples, and how they affect output quality.

What Are Zero-Shot and Few-Shot Prompting?

Zero-shot prompting asks an AI model to complete a task with no examples — just a description of what you want. “Write a function that validates an email address” is a zero-shot prompt.

Few-shot prompting gives the model one or more examples of the desired input-output pairs before asking it to complete the task. “Here are two examples of validation functions I’ve written. Now write one for email validation, following the same pattern.”

The distinction is critical in code generation contexts because programming tasks often have specific stylistic, structural, and behavioral requirements that are difficult to fully specify in prose but easy to illustrate with examples.

When Zero-Shot Works

Zero-shot is appropriate when:

The task is well-defined and standard: Writing a binary search algorithm, sorting a list, parsing JSON — tasks where there is an objectively clear implementation that the model has seen thousands of times in training data.

The requirements can be fully specified in prose: If you can completely describe what the code must do (including edge cases, error handling, return types, and constraints) without ambiguity, zero-shot with a detailed prompt often performs as well as few-shot.

Rapid prototyping is the goal: When you want a working draft quickly to iterate from, zero-shot is faster to set up than few-shot.

The model knows your stack well: For popular frameworks like React, Django, or Spring, the model has seen extensive examples and can infer stylistic conventions from framework knowledge.

When Few-Shot Is Superior

Few-shot prompting produces meaningfully better results when:

Your codebase has specific conventions: Custom error handling patterns, proprietary utility functions, unusual naming conventions, or team-specific architectural patterns that the model cannot infer from framework knowledge.

The output format is precise: When you need the generated code to look like your existing code — same indentation style, same JSDoc format, same import ordering — a single example is worth a paragraph of description.

The task is complex or novel: When the task is outside well-trodden ground (custom DSL parsing, specialized data transformations, proprietary API integrations), examples dramatically improve reliability.

You’ve seen zero-shot fail repeatedly: If the model consistently misses something in zero-shot, adding an example that demonstrates the correct handling often resolves the issue faster than refining the prose description.

Crafting Effective Few-Shot Examples

The quality of few-shot examples determines how much they help. Effective examples:

Closely Match the Target Task

The example should be from the same domain, use the same technology stack, and have similar complexity to the task you’re asking for. An example of a React component is more useful than an example of a Python function if you’re asking for another React component.

Demonstrate the Non-Obvious Parts

Don’t show the model how to do the parts it already knows how to do. Show it how to handle the specific pattern you need that it might get wrong:

// Example: This is how we handle database errors in this project
async function getUserById(id: string): Promise<Result<User, DbError>> {
  try {
    const user = await db.user.findUnique({ where: { id } });
    if (!user) return { ok: false, error: { code: 'NOT_FOUND' } };
    return { ok: true, value: user };
  } catch (e) {
    logger.error('db.getUserById failed', { id, error: e });
    return { ok: false, error: { code: 'DB_ERROR' } };
  }
}

// Now write getOrderById following this exact error handling pattern
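
Given that prompt, a faithful completion mirrors the example line for line. The sketch below shows the kind of output to expect; the Order type and db.order accessor are hypothetical extrapolations from the example:

async function getOrderById(id: string): Promise<Result<Order, DbError>> {
  try {
    const order = await db.order.findUnique({ where: { id } });
    if (!order) return { ok: false, error: { code: 'NOT_FOUND' } };
    return { ok: true, value: order };
  } catch (e) {
    // Same log shape and error codes as the example, with no prose needed
    logger.error('db.getOrderById failed', { id, error: e });
    return { ok: false, error: { code: 'DB_ERROR' } };
  }
}

Notice that the Result shape, the error codes, and the logging convention all carried over from a single example.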

Be Complete, Not Truncated

Incomplete examples with // ... rest of the code force the model to infer what was omitted. Include the complete example.
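
As a sketch of the anti-pattern, the example below elides exactly the convention it was meant to teach, leaving the model to guess at the error handling:

// Anti-pattern: the omitted lines are the pattern the example exists to show
async function getUserById(id: string): Promise<Result<User, DbError>> {
  try {
    const user = await db.user.findUnique({ where: { id } });
    // ... rest of the code
  } catch (e) {
    // ... error handling
  }
}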

Use Real Code, Not Toy Examples

If possible, use actual code from your codebase as the example. This anchors the model to your real conventions rather than idealized conventions that may differ from what you actually use.

Zero-Shot Strategies That Close the Gap

When few-shot isn’t practical (you don’t have a good example, or the task is too novel to have examples), these zero-shot strategies improve reliability:

Maximize specificity in the task description: Include exact type signatures, precise behavior specifications for each edge case, and explicit statements of what the code should not do.
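
A sketch of what that degree of specificity looks like in practice (the function and its requirements are invented for illustration):

Write a TypeScript function with this exact signature:

  function parseDuration(input: string): number | null

It parses duration strings such as "90s", "45m", or "1h30m" into total
milliseconds. Requirements:
- Units are h, m, and s; each appears at most once, in that order, and at
  least one must be present.
- Return null for an empty string, an unknown unit, a unit with no number,
  or any non-integer quantity.
- The function must not throw and must not use any external libraries.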

Use constraint-first framing: List the requirements before the task, not after. “These constraints must be met: [constraints]. With that in mind, write [task].”
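
For example (the constraints and task here are placeholders):

These constraints must be met:
- No external dependencies; Node 18 built-ins only.
- Errors are returned as values, never thrown.
- No module-level mutable state.

With that in mind, write a utility that validates a raw configuration object
and returns a typed result.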

Ask for the design before the implementation: “Before writing the code, describe in one paragraph how you’ll approach this, including the data structures and algorithm you’ll use.” This surfaces design mismatches before they become code.

Decompose complex tasks: Break a complex zero-shot task into multiple simpler zero-shot tasks. Generating a complete authentication system in one prompt is likely to produce generic output; generating the password hashing utility, then the session manager, then the route middleware separately produces more reliable results.
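
As a sketch, the authentication example might decompose into three prompts like these (the stack details are assumptions for illustration):

Prompt 1: Write a password hashing utility on top of Node's built-in
crypto.scrypt, exposing hashPassword and verifyPassword, with a random
per-password salt.

Prompt 2: Using that utility's interface, write a session manager that issues
opaque tokens, stores them in Redis with a 24-hour TTL, and supports
revocation.

Prompt 3: Write Express middleware that reads the session token from a
cookie, validates it through the session manager, and attaches the user to
the request.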

Combining Approaches

The most effective prompting for production code often combines both strategies:

  1. Use few-shot to anchor stylistic and structural conventions (1-2 examples from your codebase)
  2. Use detailed zero-shot specification for the specific task requirements
  3. Ask the model to explain its approach before generating, to catch design issues early

This hybrid approach benefits from the stylistic reliability of few-shot and the behavioral precision of detailed zero-shot specification.
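
As a template, a hybrid prompt might read like the sketch below; the bracketed parts are yours to fill in:

Here is how we write data-access functions in this codebase:

[one complete, real function pasted from your codebase]

These constraints must be met:
[return types, error handling, dependencies, anything the code must not do]

Task: write [the function you need], following the example's conventions
exactly.

Before writing the code, describe your approach in one paragraph so that
design issues surface before they become code.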
