Firecrawl is a web scraping API that turns websites into clean, structured data. The Firecrawl MCP server lets you search, scrape, crawl, and extract data from websites using natural language.

Documentation Index
Fetch the complete documentation index at: https://docs.gumloop.com/llms.txt
Use this file to discover all available pages before exploring further.
What Can It Do?
- Search the web with optional scraping and source filtering
- Scrape single URLs for content in markdown, HTML, or other formats
- Map websites to get all URLs ordered by relevance
- Crawl entire sites and extract content from multiple pages
- Deep extract data by autonomously navigating and exploring links
Where to Use It
In Agents (Recommended)
Add Firecrawl as a tool to any agent. The agent can then scrape and extract web data conversationally, choosing the right actions based on context. To add an MCP tool to your agent:
- Open your agent’s configuration
- Click Add tools → Connect an app with MCP
- Search for the integration and select it
- Authenticate with your account
In Workflows (Via Agent Node)
For automated pipelines, use an Agent Node with Firecrawl tools. This gives you the flexibility of an agent within a deterministic workflow.

As a Custom MCP Node
You can also create a standalone MCP node for a specific action. This generates a reusable node that performs one task, useful when you need the same operation repeatedly in workflows.
- Go to your node library and search for the integration
- Click Create a node with AI
- Describe the specific action you want (e.g., “Scrape this URL and get the content”)
- Test the node and save it for reuse
Custom MCP nodes are single-purpose by design. For tasks that require multiple steps or dynamic decision-making, use an agent instead.
Available Tools
| Tool | Description | Credits |
|---|---|---|
| Search | Search the web and optionally scrape full page content. Returns results organized by source type (web, images, news). | 8 per item |
| Scrape | Scrape a single URL and extract content in various formats. | 8 |
| Map | Get all URLs from a website. Returns a list of URLs ordered by relevance. | 1 |
| Crawl | Crawl a website and extract content from multiple pages. | 40 |
| Get Crawl Status | Get the status and results of a crawl job. | 8 per item |
| Batch Scrape | Scrape multiple URLs at once. | 40 |
| Get Batch Scrape Status | Get the status and results of a batch scrape job. | 8 per item |
| Deep Extract | Autonomously navigate and extract data from websites based on a prompt. Unlike regular extract, this explores links and pages to find relevant data. | 120 |
| Get Deep Extract Status | Get the status and results of a deep extract job. | 3 |
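The per-tool credit costs in the table above can be used to budget a run before starting it. A minimal sketch (cost figures are copied from the table; tools billed "per item" multiply by the number of results returned):

```python
# Credit costs per tool, taken from the table above.
FLAT_COSTS = {
    "Scrape": 8,
    "Map": 1,
    "Crawl": 40,
    "Batch Scrape": 40,
    "Deep Extract": 120,
    "Get Deep Extract Status": 3,
}
# Tools billed "per item" charge once per result returned.
PER_ITEM_COSTS = {
    "Search": 8,
    "Get Crawl Status": 8,
    "Get Batch Scrape Status": 8,
}

def estimate_credits(calls: list[tuple[str, int]]) -> int:
    """Sum the credit cost for a planned sequence of (tool, item_count) calls."""
    total = 0
    for tool, items in calls:
        if tool in FLAT_COSTS:
            total += FLAT_COSTS[tool]
        elif tool in PER_ITEM_COSTS:
            total += PER_ITEM_COSTS[tool] * items
        else:
            raise ValueError(f"unknown tool: {tool}")
    return total

# A map-then-crawl pipeline whose status check returns 5 items:
print(estimate_credits([("Map", 1), ("Crawl", 1), ("Get Crawl Status", 5)]))  # 81
```

Because Crawl, Batch Scrape, and Deep Extract run as jobs, remember to budget for the status-check calls as well as the job itself.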
Example Prompts
Use these with your agent or in the Agent Node:
- Scrape a page: “Scrape this URL and get the content”

Troubleshooting
| Issue | Solution |
|---|---|
| Agent not finding the right data | Ensure the URL is publicly accessible |
| Action not completing | Check that you’ve authenticated and have sufficient Firecrawl credits |
| Unexpected results | The agent may chain multiple tools (e.g., mapping first, then scraping). Review the agent’s reasoning to understand its approach. |
| Tool not available | Verify the tool is enabled in your agent’s MCP configuration |
Need Help?
- Agents documentation for setup and best practices
- Agent Node guide for workflow integration
- Gumloop Community for questions and examples
- Contact support@gumloop.com for assistance
Use this integration directly in Claude or Cursor. Connect remotely via the Firecrawl MCP server using your Gumloop credentials.
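MCP clients such as Claude Desktop or Cursor register remote servers through a JSON configuration entry. The exact endpoint URL and authentication scheme for the Gumloop-hosted server are not specified here, so the values below are placeholders; a minimal sketch that assembles a generic remote-server entry:

```python
def firecrawl_mcp_entry(server_url: str, api_key: str) -> dict:
    """Build a generic remote MCP server entry.

    The entry shape varies by client, and the URL and auth header
    shown here are placeholders, not documented Gumloop values.
    """
    return {
        "firecrawl": {
            "url": server_url,
            "headers": {"Authorization": f"Bearer {api_key}"},
        }
    }

# Placeholder values; substitute your actual endpoint and credentials.
entry = firecrawl_mcp_entry("https://<your-gumloop-mcp-endpoint>", "<your-api-key>")
print(entry["firecrawl"]["url"])
```

Check your client's MCP settings for where this entry belongs and for the exact fields it expects.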
