Claude Prompt Best Practices
Claude (made by Anthropic) has a distinct prompting style that differs meaningfully from other models. Learning the Claude-specific conventions — especially XML tags, extended thinking, and tool use — can dramatically improve your results. This guide covers the patterns that work best as of Claude 3.5 and Claude Opus 4.
Why Claude-Specific Prompting Matters
All frontier models are capable, but they're trained differently and respond to different prompt structures. Techniques that work well for GPT-4 often transfer to Claude, but Claude has unique strengths and conventions worth learning:
- XML tags — Claude handles structured XML delimiters exceptionally well
- Extended thinking — Claude Opus 4 supports a visible reasoning mode
- Artifacts — Claude.ai's artifact feature for code and documents
- Long context — Claude's 200k context window handles large codebases and documents
- Constitutional training — Claude's values affect how it responds to edge cases
XML Tags for Structure
Claude is trained extensively on structured text, and XML-style tags are particularly effective for organizing complex prompts.
Basic Structure
<task>
Review the following code for security vulnerabilities.
</task>
<code>
def login(username, password):
    query = f"SELECT * FROM users WHERE username = '{username}'"
    result = db.execute(query)
    return result
</code>
<output_format>
List vulnerabilities as: severity, description, line number, recommended fix.
</output_format>
Why XML Tags Work
XML tags:
- Clearly separate different types of content (instructions vs. data vs. format)
- Prevent the model from confusing user content with instructions
- Make it easy to programmatically replace content between tags
- Improve consistency in multi-turn conversations
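The "programmatically replace content" point above can be sketched as a small helper that wraps each piece of content in its own tag. The tag names here (task, user_input, output_format) are conventions from this guide, not anything the API requires:

```python
# Sketch: building an XML-tagged prompt programmatically so that
# instructions, user data, and format requirements stay clearly separated.
def build_prompt(task: str, user_input: str, output_format: str) -> str:
    """Wrap each piece of content in its own XML tag."""
    return (
        f"<task>\n{task}\n</task>\n"
        f"<user_input>\n{user_input}\n</user_input>\n"
        f"<output_format>\n{output_format}\n</output_format>"
    )

prompt = build_prompt(
    task="Summarize the review below in one sentence.",
    user_input="The product arrived late but works great.",
    output_format="A single plain-text sentence.",
)
```

Because the user content always lands inside `<user_input>`, swapping in new content between turns is a simple string substitution rather than a prompt rewrite.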
Recommended Tag Patterns
<!-- For injecting user-provided content safely -->
<user_input>{{content}}</user_input>
<!-- For lengthy instructions -->
<instructions>...</instructions>
<!-- For examples -->
<examples>
<example>
<input>...</input>
<output>...</output>
</example>
</examples>
<!-- For document content -->
<document>
<title>{{title}}</title>
<content>{{body}}</content>
</document>
<!-- For separating system rules from context -->
<rules>...</rules>
<context>...</context>
Extended Thinking (Claude Opus 4)
Extended thinking is a mode where Claude reasons through a problem in a scratchpad before responding. It's especially powerful for:
- Complex math and logic
- Multi-step planning
- Code architecture decisions
- Legal or financial reasoning
How to Enable Extended Thinking
Via the API:
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000  # tokens allocated for thinking
    },
    messages=[{
        "role": "user",
        "content": "Design the database schema for a multi-tenant SaaS application with row-level security."
    }]
)
In Claude.ai: Enable "Extended thinking" in the model settings.
Prompting With Extended Thinking
When thinking is enabled:
- You don't need to add "think step by step" — the model already reasons before responding
- Ask for the reasoning process explicitly if you want to see it: "Walk me through your reasoning"
- Extended thinking significantly improves performance on ambiguous or multi-constraint problems
- Keep the final response instructions separate from the thinking instructions
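When thinking is enabled, the response interleaves thinking blocks with the final text blocks, and you often want to separate the two. A minimal sketch of that separation, using plain dicts as stand-ins for the typed content objects the SDK actually returns (the block shapes here are assumptions based on the thinking-enabled response format):

```python
# Sketch: splitting a thinking-enabled response into reasoning vs. answer.
# Real SDK responses contain typed objects; dicts stand in here so the
# filtering logic is visible.
def split_response(content_blocks):
    thinking = [b["thinking"] for b in content_blocks if b["type"] == "thinking"]
    answer = [b["text"] for b in content_blocks if b["type"] == "text"]
    return "\n".join(thinking), "\n".join(answer)

blocks = [
    {"type": "thinking", "thinking": "First, consider tenant isolation..."},
    {"type": "text", "text": "Use a tenant_id column with RLS policies."},
]
reasoning, answer = split_response(blocks)
```

Keeping the split explicit makes it easy to log the reasoning for debugging while showing users only the final answer.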
System Prompt Conventions for Claude
Use a Persona Grounded in Values
Claude responds well to role definitions that include not just what it does but how it approaches its work:
You are a pragmatic senior engineer who values simplicity over cleverness.
You prefer battle-tested solutions over cutting-edge approaches for
production systems. You always ask clarifying questions before designing
solutions to ambiguous problems.
Specify Uncertainty Handling
When you're uncertain about a fact:
- State your uncertainty explicitly ("I believe X, but verify this")
- Provide a confidence level if helpful (high/medium/low)
- Suggest authoritative sources where appropriate
Control Sycophancy
Claude's training makes it naturally collaborative, which can sometimes manifest as excessive agreement. Counter this:
Do not validate or agree with incorrect premises. If the user states something
factually wrong, correct it directly before proceeding.
Do not start responses with affirmations like "Certainly!", "Great question!",
or "Of course!"
Artifacts in Claude.ai
Artifacts are self-contained content blocks (code files, markdown documents, HTML pages) that appear in a separate panel in Claude.ai and can be edited interactively.
When Claude Creates Artifacts
Claude automatically creates an artifact when the response:
- Is substantial code (any language)
- Is a complete document (markdown, HTML)
- Is designed to be used standalone
Explicitly Requesting Artifacts
Write a complete Python script as an artifact. The script should:
[requirements]
Make it production-ready with error handling and docstrings.
Iterating on Artifacts
Artifacts support in-place editing:
"Add input validation to the form fields"
"Change the color scheme to dark mode"
"Add unit tests at the bottom"
Each instruction updates the existing artifact rather than generating a new one.
Tool Use (Function Calling)
Claude's tool use follows the same pattern as other models but with Claude-specific nuances.
Defining Tools
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location. Use this when the user asks about current weather conditions or forecasts.",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name or zip code"
                }
            },
            "required": ["location"]
        }
    }
]
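On the application side, a tool definition like the one above pairs with a handler that executes the call and shapes the result message sent back. A minimal sketch, assuming the tool_use block fields (id, name, input) of the tool-use response format; get_weather here is a stub rather than a real weather lookup:

```python
# Sketch: executing a requested tool call and building the tool_result
# message to send back to the model.
def get_weather(location: str) -> str:
    return f"Sunny, 22C in {location}"  # stub; a real handler would call a weather API

TOOL_HANDLERS = {"get_weather": get_weather}

def run_tool(tool_use_block: dict) -> dict:
    """Run the named tool and wrap its output in a tool_result block
    that references the original call's id."""
    handler = TOOL_HANDLERS[tool_use_block["name"]]
    result = handler(**tool_use_block["input"])
    return {
        "type": "tool_result",
        "tool_use_id": tool_use_block["id"],
        "content": result,
    }

block = {"id": "toolu_01", "name": "get_weather", "input": {"location": "Paris"}}
result_msg = run_tool(block)
```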
Tool Selection Guidance
Claude generally makes good tool selection decisions, but explicit guidance helps:
Use get_weather only for real-time weather queries. For historical weather
patterns or general climate information, answer from your training data.
Long Context Best Practices
Claude's 200k context window is one of its most valuable features. Use it effectively:
- Put the task at both ends — state your task at the top and repeat the key instruction at the bottom for very long contexts
- Use explicit section headers — "The relevant code is in the IMPLEMENTATION section below"
- Point to relevant sections — "Focus your review on the authentication functions, which are tagged with <authentication>"
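The "task at both ends" pattern above can be sketched as a small prompt assembler; the tag names are illustrative conventions, not API requirements:

```python
# Sketch: long-context prompt with the task stated before the document
# and repeated as a reminder after it.
def long_context_prompt(task: str, document: str) -> str:
    return (
        f"<task>\n{task}\n</task>\n"
        f"<document>\n{document}\n</document>\n"
        f"Reminder: {task}"
    )

prompt = long_context_prompt(
    task="List every function that performs authentication.",
    document="def login(...): ...\ndef logout(...): ...",
)
```

Repeating the instruction after a long document keeps it close to the end of the context, where it is less likely to be diluted by the intervening material.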
Key Takeaways
- XML tags are Claude's native structure — use them for any complex prompt
- Extended thinking dramatically improves accuracy on hard reasoning tasks
- Counter sycophancy explicitly in system prompts
- Artifacts in Claude.ai support iterative editing — great for code and documents
- Claude's 200k context handles entire codebases — use section labels to direct attention