This document explains the Summarizer node, which condenses large texts into concise summaries using AI.
## Node Inputs

### Required Fields

- Text: Content to summarize (documents, articles, etc.)

### Optional Fields (More Options)

- Choose AI Model: Select your preferred AI model
- Temperature: Controls summary style (0-1)
  - 0: More focused, factual
  - 1: More creative, varied
- Cache Response: Save responses for reuse
- Prompt Template: Customize how the summary is requested
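The inputs above can be pictured as assembling a model request. This is an illustrative sketch only: the function and payload field names are assumptions, not Gumloop's actual internals.

```python
# Hypothetical sketch of how the Summarizer node's inputs map onto a
# model request. Names here are illustrative, not Gumloop's real API.

DEFAULT_TEMPLATE = "Summarize the following text concisely:\n\n{text}"

def build_summary_request(text, model="Claude 4.5 Haiku",
                          temperature=0.2, prompt_template=DEFAULT_TEMPLATE):
    """Assemble the payload the node would send to the chosen model."""
    if not 0 <= temperature <= 1:
        raise ValueError("Temperature must be between 0 and 1")
    return {
        "model": model,
        "temperature": temperature,  # 0 = focused/factual, 1 = creative
        "prompt": prompt_template.format(text=text),
    }

request = build_summary_request("Quarterly revenue rose 12%...", temperature=0.0)
```

Overriding `prompt_template` corresponds to the Prompt Template field: any template containing a `{text}` placeholder changes how the summary is requested without changing the input text.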
## Show As Input

The node allows you to configure certain parameters as dynamic inputs. You can enable these in the “Configure Inputs” section:

- model_preference: String
  - Name of the AI model to use
  - Accepted values: “Claude 4.5 Sonnet”, “Claude 4.5 Haiku”, “GPT-5”, “GPT-4.1”, etc.
- Cache Response: Boolean
  - true/false to enable/disable response caching
  - Helps reduce API calls for identical inputs
- Temperature: Number
  - Value between 0 and 1
  - Controls summary consistency
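When these parameters arrive as dynamic inputs from upstream nodes, the types above imply some basic validation. A minimal sketch, assuming a partial model list (the doc's "etc." means the real set is larger) and hypothetical checking logic:

```python
# Illustrative validation for the three "Show As Input" parameters.
# The value types follow the doc; the model list is intentionally
# partial and the checking logic itself is an assumption.

KNOWN_MODELS = {"Claude 4.5 Sonnet", "Claude 4.5 Haiku", "GPT-5", "GPT-4.1"}

def validate_dynamic_inputs(model_preference, cache_response, temperature):
    if not isinstance(model_preference, str) or model_preference not in KNOWN_MODELS:
        raise ValueError(f"Unknown model: {model_preference!r}")
    if not isinstance(cache_response, bool):
        raise TypeError("Cache Response must be true or false")
    if not isinstance(temperature, (int, float)) or not 0 <= temperature <= 1:
        raise ValueError("Temperature must be a number between 0 and 1")
    return {"model_preference": model_preference,
            "cache_response": cache_response,
            "temperature": temperature}
```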
## Node Output

- Summary: Condensed version of input text
## Available AI Models
| Tier | Models |
|---|---|
| Expert | GPT-5.2, GPT-5.1, GPT-5, OpenAI o3, Claude 4.5/4.1/4 Opus, Claude 3.7 Sonnet Thinking, Gemini 3 Pro, Grok 4 |
| Advanced | GPT-4.1, OpenAI o4-mini, Claude 4.5/4/3.7 Sonnet, Gemini 2.5 Pro, Grok 3, Perplexity Sonar Pro, LLaMA 3 405B |
| Standard | GPT-4.1 Mini/Nano, GPT-5 Mini/Nano, Claude 4.5 Haiku, Gemini 3/2.5 Flash, Grok 3 Mini, DeepSeek V3/R1, Mixtral 8x7B |
| Special | Auto-Select, Azure OpenAI (requires credentials) |
Auto-Select uses third-party routing to choose models based on cost and performance. Not ideal when consistent behavior is required.
## AI Model Selection Guide

| Model Type | Ideal Use Cases | Considerations |
|---|---|---|
| Standard Models | General content creation, basic Q&A, simple analysis | Lower cost, faster response time, good for most everyday tasks |
| Advanced Models | Complex analysis, nuanced content, specialized knowledge domains | Better quality but higher cost, good balance of performance and efficiency |
| Expert & Thinking-Enabled Models | Complex reasoning, step-by-step problem-solving, coding, detailed analysis, math problems, technical content | Highest quality but most expensive, best for complex and long-form tasks, longer response time |

When choosing an AI model for your task, consider these key factors:

- Task complexity and required accuracy
- Response time requirements
- Cost considerations
- Consistency needs across runs
- Specialized knowledge requirements
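The factors above can be reduced to a rough tier-selection heuristic. The thresholds and factor names below are illustrative assumptions, not a Gumloop feature:

```python
# A rough heuristic mapping selection factors to a model tier.
# The factor names and thresholds are illustrative assumptions.

def pick_tier(complexity, needs_reasoning=False, latency_sensitive=False):
    """complexity is one of 'simple', 'moderate', or 'complex'."""
    if needs_reasoning or complexity == "complex":
        return "Expert"    # best quality, highest cost, slowest
    if complexity == "moderate" and not latency_sensitive:
        return "Advanced"  # balance of quality and cost
    return "Standard"      # cheap and fast for everyday tasks
```

For example, a routine news digest maps to Standard, while step-by-step technical analysis maps to Expert.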
Further reading:

- Anthropic Models Overview
- Anthropic Extended Thinking Documentation
- OpenAI Reasoning Guide
- OpenAI o3 Models
## Common Use Cases

- Document Summarization
- News Digests
- Meeting Notes

## Loop Mode Pattern
## Important Considerations

- Expert models (OpenAI o3) cost 30 credits per run, advanced models (GPT-4.1 and Claude 3.7 Sonnet) cost 20, and standard models cost 2
- You can drop the credit cost to 1 by providing your own API key on the Credentials page
- Use a lower temperature (0-0.3) for factual summaries
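The credit math above is simple to work out ahead of time. A small sketch using the per-run costs stated in this section (the function itself is illustrative):

```python
# Credit costs per run, as stated in Important Considerations:
# expert 30, advanced 20, standard 2; flat 1 with your own API key.

CREDITS_PER_RUN = {"expert": 30, "advanced": 20, "standard": 2}

def run_cost(tier, runs, own_api_key=False):
    """Total credits for `runs` executions of a model in `tier`."""
    per_run = 1 if own_api_key else CREDITS_PER_RUN[tier]
    return per_run * runs
```

So 100 summaries on an advanced model cost 2,000 credits, or 100 credits with your own API key.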
