Firecrawl is a web scraping API that turns websites into clean, structured data. The Firecrawl MCP server lets you search, scrape, crawl, and extract data from websites using natural language.

What Can It Do?

  • Search the web with optional scraping and source filtering
  • Scrape single URLs for content in markdown, HTML, or other formats
  • Map websites to get all URLs ordered by relevance
  • Crawl entire sites and extract content from multiple pages
  • Deep extract data by autonomously navigating and exploring links

Where to Use It

Add Firecrawl as a tool to any agent. The agent can then scrape and extract web data conversationally, choosing the right actions based on context. To add an MCP tool to your agent:
  1. Open your agent’s configuration
  2. Click Add tools, then Connect an app with MCP
  3. Search for the integration and select it
  4. Authenticate with your account
You can control which tools your agent has access to. After adding an integration, click on it to enable or disable specific tools based on what your agent needs.

In Workflows (Via Agent Node)

For automated pipelines, use an Agent Node with Firecrawl tools. This gives you the flexibility of an agent within a deterministic workflow.

As a Custom MCP Node

You can also create a standalone MCP node for a specific action. This generates a reusable node that performs one task, useful when you need the same operation repeatedly in workflows.
To create a custom MCP node:
  1. Go to your node library and search for the integration
  2. Click Create a node with AI
  3. Describe the specific action you want (e.g., “Scrape this URL and get the content”)
  4. Test the node and save it for reuse
Custom MCP nodes are single-purpose by design. For tasks that require multiple steps or dynamic decision-making, use an agent instead.

Available Tools

| Tool | Description | Credits |
| --- | --- | --- |
| Search | Search the web and optionally scrape full page content. Returns results organized by source type (web, images, news). | 8 per item |
| Scrape | Scrape a single URL and extract content in various formats. | 8 |
| Map | Get all URLs from a website. Returns a list of URLs ordered by relevance. | 1 |
| Crawl | Crawl a website and extract content from multiple pages. | 40 |
| Get Crawl Status | Get the status and results of a crawl job. | 8 per item |
| Batch Scrape | Scrape multiple URLs at once. | 40 |
| Get Batch Scrape Status | Get the status and results of a batch scrape job. | 8 per item |
| Deep Extract | Autonomously navigate and extract data from websites based on a prompt. Unlike regular extract, this explores links and pages to find relevant data. | 120 |
| Get Deep Extract Status | Get the status and results of a deep extract job. | 3 |

Example Prompts

Use these with your agent or in the Agent Node:

Scrape a page:
Scrape this URL and get the main content as markdown
Search the web:
Search for "AI startup funding" and get the top 10 results
Map a website:
Get all URLs from example.com
Crawl a site:
Crawl example.com/blog with depth 2 and get all article content
Deep extract:
Extract pricing information from this SaaS website, exploring all relevant pages
Batch scrape:
Scrape these 5 URLs and get the main content from each

Troubleshooting

| Issue | Solution |
| --- | --- |
| Agent not finding the right data | Ensure the URL is publicly accessible |
| Action not completing | Check that you’ve authenticated and have sufficient Firecrawl credits |
| Unexpected results | The agent may chain multiple tools (e.g., mapping first, then scraping). Review the agent’s reasoning to understand its approach. |
| Tool not available | Verify the tool is enabled in your agent’s MCP configuration |
Agents are smart enough to chain multiple API calls together. For example, asking “Get all blog posts from this site” will map the URLs first, then scrape each one. If results seem off, check the agent’s step-by-step reasoning.
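As a rough illustration of how chained calls add up, here is a minimal sketch using the credit figures from the Available Tools table (Map = 1, Scrape = 8 per URL). The function name and structure are illustrative only, not part of the Firecrawl or Gumloop API:

```python
# Hypothetical helper: estimate credits for the "map first, then scrape each URL"
# chain described above. Credit figures come from the Available Tools table.

CREDITS = {
    "map": 1,      # Map: get all URLs from a site
    "scrape": 8,   # Scrape: one URL
}

def map_then_scrape_cost(num_urls: int) -> int:
    """Credits consumed when the agent maps a site, then scrapes each URL found."""
    return CREDITS["map"] + CREDITS["scrape"] * num_urls

# Example: mapping a blog and scraping 10 posts costs 1 + 8 * 10 = 81 credits.
print(map_then_scrape_cost(10))
```

Reviewing the agent’s reasoning tells you which tools it actually chained, so you can check an estimate like this against real credit usage.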

Use in Claude or Cursor

Use this integration directly in Claude or Cursor. Connect remotely via the Firecrawl MCP server using your Gumloop credentials.
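If your client reads a Cursor-style `mcp.json` file, the entry might look like the sketch below. The server URL shown is a placeholder, not a documented endpoint — copy the actual remote URL and credentials from your Gumloop account settings:

```json
{
  "mcpServers": {
    "firecrawl": {
      "url": "https://<your-gumloop-mcp-endpoint>/firecrawl"
    }
  }
}
```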