This document explains the Website Crawler node, which gathers all links from a website by traversing its pages.
URL: The web address where the crawl starts.
Example: “https://www.gumloop.com/”
Depth: How many layers of links to follow from the starting page (1-3).
You can expose these fields as inputs, so their values can be supplied by other nodes in the flow.
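Conceptually, the node performs a depth-limited, breadth-first traversal: it fetches the starting page, collects its links, then (if Depth allows) fetches those pages and collects their links in turn. The sketch below is an illustrative approximation of that behavior, not Gumloop's implementation; the use of the `requests` and `beautifulsoup4` libraries and the same-domain filter are assumptions made for the example.

```python
# Illustrative sketch of a depth-limited link crawler (not Gumloop's implementation).
# Assumes the third-party `requests` and `beautifulsoup4` packages are installed.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl_links(start_url: str, depth: int = 1) -> set[str]:
    """Collect links reachable from start_url, following at most `depth`
    layers of pages and staying on the starting domain."""
    domain = urlparse(start_url).netloc
    seen = {start_url}
    frontier = [start_url]
    collected = set()

    for _ in range(depth):
        next_frontier = []
        for page in frontier:
            try:
                html = requests.get(page, timeout=10).text
            except requests.RequestException:
                continue  # skip pages that fail to load
            for tag in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(page, tag["href"])
                if urlparse(link).netloc != domain:
                    continue  # keep the crawl scoped to the starting domain
                collected.add(link)
                if link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier

    return collected


if __name__ == "__main__":
    for url in sorted(crawl_links("https://www.gumloop.com/", depth=1)):
        print(url)
```

With depth=1 only links on the starting page are returned; each additional level of Depth follows the newly discovered pages one layer further, which is why deeper crawls take noticeably longer.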
Screenshot: the Website Crawler node, showing the URL field (placeholder “https://”).
In summary, the Website Crawler node helps map website structures by systematically collecting links, with controls for depth and domain scope.