Search, scrape, and map websites using Firecrawl’s web data API. Turn any website into clean, structured data for your workflows: search the web, extract content from URLs, crawl entire sites, map site structure, and pull structured data using LLM-powered extraction.

How to Use MCP Nodes

What is Firecrawl MCP?

The Firecrawl MCP creates a customized node that understands Firecrawl’s web data API and responds to natural language prompts. Describe what you need, and the node executes Firecrawl tools to search, scrape, crawl, map, or extract data from websites. Results come back as structured data that you can pass to the next step in your workflow.

What Can It Do for You?

  • Search the web with optional scraping, supporting multiple sources, time filtering, geo-targeting, and category filtering
  • Scrape single URLs and extract content in various formats including markdown, HTML, links, screenshots, and images
  • Map entire websites to discover all URLs with sitemap control
  • Crawl websites and extract content from multiple pages with natural language prompts, path filtering, and depth control
  • Extract structured data from URLs using LLM with JSON Schema for precise data extraction
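Under the hood, these tools correspond to calls against Firecrawl’s REST API. As a rough sketch of what the Scrape tool sends, here is a minimal request builder; the endpoint path and the `formats` field are assumptions based on Firecrawl’s v1 API, and the helper name is illustrative:

```python
import json

# Assumed v1 scrape endpoint; check Firecrawl's API reference for your plan
FIRECRAWL_SCRAPE_URL = "https://api.firecrawl.dev/v1/scrape"

def build_scrape_payload(url: str, formats: list[str]) -> dict:
    """Build the JSON body for a single-URL scrape request.

    `formats` selects the output types listed above
    (e.g. "markdown", "html", "links", "screenshot").
    """
    return {"url": url, "formats": formats}

payload = build_scrape_payload("https://example.com", ["markdown", "links"])
print(json.dumps(payload))
# Sending it requires an API key, so the POST is shown but not run:
# requests.post(FIRECRAWL_SCRAPE_URL, json=payload,
#               headers={"Authorization": "Bearer fc-YOUR_KEY"})
```

The MCP node builds and sends requests like this for you from a natural language prompt, so you never have to write this code yourself.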

Available Tools

| Tool | What It Does | Example Use |
| --- | --- | --- |
| Search | Search the web with optional scraping. Supports multiple sources, time filtering, geo-targeting, and category filtering | "Search the web for `search term` and return structured data with title, url, and snippet for the top `result count` results" |
| Scrape | Scrape a single URL and extract content in various formats (markdown, summary, HTML, links, screenshots, images, branding) | "Scrape `url` and return structured data with title, markdown content, and all links found on the page" |
| Map | Get all URLs from a website with sitemap control | "Map the website `url` and return structured data with all discovered URLs" |
| Crawl | Crawl a website and extract content from multiple pages. Supports natural language prompts, path filtering, and depth control | "Crawl `url` with depth `depth` and return structured data with page titles, URLs, and markdown content" |
| Get Crawl Status | Get the status and results of a crawl job using the crawl ID | "Using crawl ID `crawl_id`, get the crawl status and return structured data with status, completed pages, and results" |
| Batch Scrape | Scrape multiple URLs at once with shared scrape options | "Batch scrape `urls` and return structured data with title, url, and markdown content for each page" |
| Get Batch Scrape Status | Get the status and results of a batch scrape job using the batch ID | "Using batch ID `batch_id`, get the batch scrape status and return structured data with status and results" |
| Extract | Extract structured data from URLs using LLM with JSON Schema | "Extract `fields` from `url` and return structured data matching the schema" |
| Get Extract Status | Get the status and results of an extract job using the extract ID | "Using extract ID `extract_id`, get the extract status and return structured data with status and extracted data" |
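Note that Crawl, Batch Scrape, and Extract are asynchronous: the first call starts a job and returns an ID, and the matching status tool reports progress until the job completes. A small sketch of reading a status payload; the field names (`status`, `completed`, `total`) are assumptions based on the tool descriptions above, not a verified response schema:

```python
def summarize_crawl_status(status_response: dict) -> str:
    """Condense a crawl-status payload into a one-line progress summary.

    Field names are assumed from the tool descriptions; adjust them
    to match the structured output your node actually returns.
    """
    state = status_response.get("status", "unknown")
    done = status_response.get("completed", 0)
    total = status_response.get("total", 0)
    return f"{state}: {done}/{total} pages"

# Example payload shaped like what Get Crawl Status might return
sample = {"status": "scraping", "completed": 12, "total": 40, "data": []}
print(summarize_crawl_status(sample))  # → "scraping: 12/40 pages"
```

In a workflow, this means pairing each start tool with its status tool: run Crawl, pass the returned `crawl_id` to Get Crawl Status, and loop or wait until the status reports completion.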

Credit Costs

| Tool | Credits Per Use |
| --- | --- |
| Search | 85 per item |
| Scrape | 80 credits |
| Map | 10 credits |
| Crawl | 80 per item |
| Get Crawl Status | 80 per item |
| Batch Scrape | 80 per item |
| Get Batch Scrape Status | 80 per item |
| Extract | 300 credits |
| Get Extract Status | 3 credits |
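Because several tools are billed per item, costs scale with the number of pages or results. A quick estimator using the figures from the table above (these numbers are only as current as this page, so verify them before budgeting):

```python
# Credits per use, copied from the table above
CREDITS = {
    "search": 85,            # per item
    "scrape": 80,
    "map": 10,
    "crawl": 80,             # per page crawled
    "batch_scrape": 80,      # per item
    "extract": 300,
    "get_extract_status": 3,
}

def crawl_cost(pages: int) -> int:
    """Estimate credits for crawling `pages` pages at 80 credits each."""
    return pages * CREDITS["crawl"]

print(crawl_cost(25))  # 25 pages × 80 credits = 2000 credits
```

This is why the troubleshooting tips below recommend mapping a site first: a 10-credit Map run tells you how many pages a crawl would touch before you commit to per-page crawl charges.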

How to Use

1. Create Your Firecrawl MCP Node

Go to your node library, search for Firecrawl, and click “Create a node with AI”.

2. Add Your Prompt

Drag the Firecrawl MCP node to your canvas and add your prompt in the text box.

3. Test Your Node

Run the node to see the results. If it works as expected, you’re all set. If you need adjustments, check the troubleshooting tips below.

4. Save and Reuse

Once your Firecrawl MCP node is working, save it to your library. You can now use this customized node in any workflow.

Example Prompts

Here are some prompts that work well with Firecrawl MCP:

Web Search with Scraping:
Search the web for `search term` and return structured data with title, url, and snippet for the top 10 results
Scrape a Single Page:
Scrape `url` and return structured data with title, markdown content, meta description, and all links found on the page
Map a Website:
Map the website `url` and return structured data with all discovered URLs and their paths
Crawl Multiple Pages:
Crawl `url` with depth 2 and return structured data with page title, url, and markdown content for each page
Extract Structured Data:
Extract company name, pricing plans, and features from `url` and return structured data matching the provided schema
Batch Scrape Multiple URLs:
Batch scrape `urls` and return structured data with title, url, and main content for each page
Start with a simple scrape or map operation to understand a site’s structure before running larger crawl jobs.
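The Extract prompt above works best when you give the node an explicit JSON Schema to match. A sketch of a schema for the “company name, pricing plans, and features” example; the field names are illustrative, not required by Firecrawl:

```python
import json

# Illustrative JSON Schema for the pricing-page extraction prompt above
extract_schema = {
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "pricing_plans": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price": {"type": "string"},
                },
                "required": ["name"],
            },
        },
        "features": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["company_name"],
}

print(json.dumps(extract_schema, indent=2))
```

The tighter the schema, the less the LLM extraction has to guess; marking only the fields you truly need as `required` keeps extraction robust on pages where optional fields are missing.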

Troubleshooting

If your Firecrawl MCP node isn’t working as expected, try these best practices:

Keep Prompts Simple and Specific

  • Good: “Scrape `url` and return title and markdown content”
  • Less Efficient: “Scrape `url`, summarize the content, extract all company information, and format it as a report”
While the longer prompt might run, it’s more efficient to break it into separate nodes. Firecrawl MCP works best with focused, single-action prompts.

Match What Firecrawl Can Do

  • Good: “Crawl `url` with depth 2 and return page titles and URLs”
  • Less Efficient: “Crawl `url` and send the results to a Google Sheet”
Firecrawl MCP excels at web scraping, crawling, and data extraction. For writing to spreadsheets, combine it with Google Sheets nodes in your workflow or agent.

Break Complex Tasks Into Steps

Trying to do everything in one prompt can sometimes lead to timeouts or unexpected results:
Crawl the entire website, extract all product information, compare prices, and generate a summary report
A more efficient approach is to split this into smaller, focused nodes:
1. Map the Website

Map the website `url` and return structured data with all discovered URLs

2. Scrape Product Pages

For each product URL, scrape and return structured data with title, price, and description

3. Generate Report

Using the scraped data, create a summary report with Ask AI and return structured data with key findings
In your workflow, connect these nodes sequentially. The output from Step 1 becomes the input for Step 2, and Step 2’s data feeds into Step 3.
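The handoff from Step 1 to Step 2 is just filtering the Map output into a Batch Scrape input. A sketch of that glue logic; the `links` field and the `/products/` path are assumptions for illustration:

```python
def product_urls_from_map(map_result: dict, path_prefix: str = "/products/") -> list[str]:
    """Filter a Map result down to product pages.

    Assumes the Map step returns {"links": [...]}; adjust the field
    name to match your node's actual structured output.
    """
    return [u for u in map_result.get("links", []) if path_prefix in u]

def build_batch_scrape_payload(urls: list[str]) -> dict:
    """Shape the filtered URLs into a Batch Scrape request body."""
    return {"urls": urls, "formats": ["markdown"]}

# Example Step 1 output for a hypothetical site
step1_output = {"links": [
    "https://example.com/about",
    "https://example.com/products/widget",
    "https://example.com/products/gadget",
]}
urls = product_urls_from_map(step1_output)
print(build_batch_scrape_payload(urls))
```

In practice the workflow canvas does this wiring for you; the sketch just shows why each node’s structured output needs to match the next node’s expected input fields.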

Focus on Data Retrieval

Firecrawl MCP is great at getting information from websites. For analysis or content creation, connect it to AI nodes in your workflow or agent. Example:
  • Good: “Scrape `url` and return structured data with title, author, published date, and main content”
  • Less Efficient: “Scrape `url` and write a blog post summary about the content”
Use Ask AI for writing summaries or deeper analysis. Keep Firecrawl focused on scraping and extracting data, then hand off to the right node.

Troubleshooting Node Creation

If you’re seeing empty outputs, open the node creation window (or, if you’ve already created the node, hover over it and click “Edit”) and use the chat interface to prompt the AI to add debug logs and verify the API response. Mention specifically that you received empty outputs. For other issues, use the same chat interface to describe in detail what you expected versus what you received.
MCP node creation often requires a few tweaks. Use the chat interface in the node creation window to refine filters, output fields, or pagination, and the AI will adjust the node based on your feedback.

Need More Help?