Building Agent Prompts with Tools and MCP

Learn how to write prompts for AI agents that use tools, function calling, and MCP servers. Covers structuring agent system prompts, tool definitions, and publishing agent prompts on fireflare.

AI agents are a step beyond single-turn prompts. An agent uses a model as its reasoning core, paired with tools — functions the model can call to take action in the world: search the web, run code, read files, call APIs. The Model Context Protocol (MCP) has emerged as a standard for connecting models to tool servers, making it easier to build and share agent configurations.

This guide covers how to write effective agent prompts, structure tool definitions, and publish reusable agent configurations on fireflare.

What Makes Agent Prompting Different

A standard prompt gets a single response. An agent prompt is the system instruction for an ongoing reasoning loop:

System prompt (persistent)
  ↓
User request
  ↓
Model thinks → decides to use a tool → calls tool
  ↓
Tool returns result
  ↓
Model thinks again → calls another tool or responds
  ↓
... (repeats until task complete)

Your system prompt needs to guide this entire loop, not just a single response. It must specify:

  • The agent's role and goal
  • When to use which tools
  • How to handle tool errors
  • When to stop and report back
  • How to handle ambiguous situations
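
The loop above can be sketched in plain Python. Here `call_model` and `run_tool` are stubs standing in for a real LLM API and real tool implementations — the point is the shape of the loop, including the hard step cap that prevents runaway agents:

```python
# Minimal agent loop sketch. `call_model` is a stub standing in for a real
# LLM API call; it returns either a tool request or a final answer.
def call_model(messages):
    # Stub logic: if the last message is a tool result, finish; else call a tool.
    if messages[-1]["role"] == "tool":
        return {"type": "final", "content": "Done: " + messages[-1]["content"]}
    return {"type": "tool_call", "name": "web_search", "args": {"query": "example"}}

def run_tool(name, args):
    # Stub tool registry -- a real agent would dispatch to actual functions.
    tools = {"web_search": lambda a: f"results for {a['query']}"}
    return tools[name](args)

def agent_loop(system_prompt, user_request, max_steps=10):
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_request}]
    for _ in range(max_steps):           # hard cap prevents infinite loops
        action = call_model(messages)
        if action["type"] == "final":
            return action["content"]     # model chose to stop and report
        result = run_tool(action["name"], action["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted"
```

Everything the system prompt specifies — role, tool selection rules, stopping criteria — shapes what `call_model` decides at each turn of this loop.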

Anatomy of an Agent System Prompt

You are [role] that helps users [goal].

## Tools Available
[List tools and when to use each]

## Workflow
[Step-by-step process for the main task]

## Decision Rules
[When to use which tool, in what order]

## Error Handling
[What to do when tools fail]

## Completion Criteria
[When to stop and report, what to include in the final response]

## Constraints
[What the agent must never do]

Writing Tool Descriptions That Work

Tool descriptions are part of the agent's context. Poorly written tool descriptions are one of the most common causes of agent failure — the model calls the wrong tool or calls tools in the wrong order.

Poor tool description:

{
  "name": "search",
  "description": "Search for information"
}

Good tool description:

{
  "name": "web_search",
  "description": "Search the web for current information. Use this when you need facts that may have changed after your training cutoff, when you need to find specific URLs or sources, or when the user asks about recent events. Do NOT use this for general knowledge questions you can answer from training data.",
  "parameters": {
    "query": {
      "type": "string",
      "description": "The search query. Be specific — include the topic, relevant qualifiers, and date range if time-sensitive."
    },
    "num_results": {
      "type": "integer",
      "description": "Number of results to return. Default 5. Use 10 for broad research tasks."
    }
  }
}

Key elements of a good tool description:

  • When to use it — explicit trigger conditions
  • When NOT to use it — prevents unnecessary tool calls
  • Parameter guidance — not just what each parameter is, but how to use it well
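
One way to keep descriptions and behavior in sync is to register each tool's definition alongside its handler, so the text the model sees and the code that runs live together. A minimal sketch, with `fake_search` as an illustrative placeholder for a real search API:

```python
# Sketch: tool definition and handler registered together, so the description
# the model reads and the code that executes stay in one place.
def fake_search(query, num_results=5):
    # Placeholder handler; a real agent would call an actual search API here.
    return [f"result {i} for {query!r}" for i in range(num_results)]

TOOLS = {
    "web_search": {
        "description": (
            "Search the web for current information. Use for facts that may "
            "have changed after the training cutoff. Do NOT use for general "
            "knowledge questions answerable from training data."
        ),
        "parameters": {
            "query": {"type": "string"},
            "num_results": {"type": "integer", "default": 5},
        },
        "handler": fake_search,
    },
}

def dispatch(name, args):
    """Look up a tool by name and invoke its handler with the model's arguments."""
    return TOOLS[name]["handler"](**args)
```

When the model emits a tool call, the harness routes it through `dispatch` and feeds the return value back into the conversation as a tool result.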

Example: Research Agent System Prompt

You are a research assistant that produces factual, well-sourced summaries
on any topic the user requests.

## Available Tools
- web_search: Search the web for current information. Use for any topic that
  may have recent developments, statistics, or when sources are needed.
- read_url: Fetch and read a specific web page. Use to read the full content
  of a search result when the snippet isn't sufficient.
- save_note: Save a finding to the research scratchpad. Use to store key
  facts, quotes, and sources as you research.

## Research Workflow
1. Analyze the user's request. Identify the 3-5 main questions to answer.
2. For each question, perform a targeted web search.
3. Read the most relevant sources in full using read_url.
4. Save key findings using save_note.
5. Once you have sufficient information, synthesize a comprehensive response.

## Quality Standards
- Every factual claim must cite a source (URL and publication date)
- If sources conflict, note the conflict and present both perspectives
- If you cannot find reliable sources for a claim, say so explicitly
- Do not present information from training data as if it were a current source

## Stopping Criteria
Stop researching and write the summary when:
- You have at least 3 high-quality sources per main question, OR
- You have spent 8 or more tool calls researching
Report to the user if you couldn't find sufficient sources.

## Constraints
- Never fabricate citations
- Do not read paywalled content
- Do not save personally identifiable information to notes
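
The stopping criteria above translate naturally into a check the harness itself can enforce, in addition to the prompt. A sketch, with thresholds mirroring the prompt (names are illustrative):

```python
# Enforce the research agent's stopping criteria in code: stop when every
# question has at least 3 sources, or after 8 tool calls, whichever comes first.
def should_stop(sources_per_question, tool_calls_used,
                min_sources=3, max_tool_calls=8):
    if tool_calls_used >= max_tool_calls:
        return True                      # tool-call budget exhausted
    # All main questions sufficiently sourced?
    return all(n >= min_sources for n in sources_per_question.values())
```

Belt-and-suspenders: the prompt tells the model when to stop, and the harness guarantees it actually does.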

MCP (Model Context Protocol)

MCP is a standard that lets AI models connect to external tool servers over a standardized protocol. Instead of each application implementing its own tool integrations, MCP servers expose tools that any compatible client (Claude, Cursor, etc.) can use.
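
As a concrete example, connecting Claude Desktop to the Filesystem MCP server is a single config entry. A typical `claude_desktop_config.json` fragment (the exact package name and arguments may change — check the server's README):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```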

Publishing MCP-Compatible Agent Prompts on fireflare

When publishing an agent prompt that uses MCP tools, include in your description:

  1. Which MCP server(s) are required — e.g., "Requires the Filesystem MCP server"
  2. Setup instructions — brief steps to configure the MCP connection
  3. Tool list — which specific tools from that server the prompt uses
  4. Compatible clients — which AI clients support the required MCP servers

Example agent prompt description:

This agent prompt requires the mcp-server-filesystem MCP server with read access to your project directory. Tested with Claude Desktop. The agent will read your codebase structure, identify architecture issues, and produce a prioritized improvement plan. Setup: add mcp-server-filesystem to your Claude Desktop config with path set to your project root.

Common Agent Prompt Mistakes

Underspecifying tool selection. Without guidance, the model calls tools randomly or uses the first available tool for everything. Specify trigger conditions for each tool.

No stopping criteria. Agents without explicit stopping conditions can loop indefinitely or stop prematurely. Define what "done" looks like.

No error handling. Tools fail. Define what the agent should do when a tool returns an error, empty results, or unexpected data.
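
The harness can absorb transient failures before the model ever sees them; the prompt then only has to cover what to do once retries are spent. A retry-wrapper sketch (names illustrative):

```python
import time

# Sketch: retry a flaky tool call with exponential backoff, then surface the
# error to the model as a structured tool result it can reason about.
def call_with_retries(tool_fn, args, retries=2, base_delay=0.5):
    for attempt in range(retries + 1):
        try:
            return {"ok": True, "result": tool_fn(**args)}
        except Exception as exc:
            if attempt == retries:
                # Out of retries: return an error instead of crashing the loop.
                return {"ok": False, "error": str(exc)}
            time.sleep(base_delay * (2 ** attempt))
```

Feeding `{"ok": false, "error": ...}` back as the tool result gives the model something concrete to act on, which is exactly where the prompt's error-handling rules kick in.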

Confusing goals with tasks. "Research this topic" is a goal. An effective agent prompt includes the specific tasks (what to search, how many sources, what to include in the output) to achieve that goal.

Key Takeaways

  • Agent prompts guide a multi-step reasoning loop, not a single response
  • Tool descriptions need explicit "when to use" and "when not to use" guidance
  • Always define stopping criteria and error handling
  • For MCP-based agents, document required servers and setup in the prompt description
  • Test agents with adversarial inputs — gaps in the instructions surface fastest at edge cases