Summarizer
This document explains the Summarizer node, which condenses large texts into concise summaries using AI.
Node Inputs
Required Fields
- Text: Content to summarize (documents, articles, etc.)
Optional Fields in More Options
- Choose AI Model: Select your preferred AI model
- Temperature: Controls summary style (0-1)
- 0: More focused, factual
- 1: More creative, varied
- Cache Response: Save responses for reuse
- Prompt Template: Customize how the summary is requested
Show As Input
The node allows you to configure certain parameters as dynamic inputs. You can enable these in the “Configure Inputs” section:
- model_preference: String
  - Name of the AI model to use
  - Accepted values: “Claude 3.5 Sonnet”, “Claude 3 Haiku”, “GPT-4o”, “GPT-4o Mini”, etc.
- Cache Response: Boolean
  - true/false to enable/disable response caching
  - Helps reduce API calls for identical inputs
- Temperature: Number
  - Value between 0 and 1
  - Controls summary consistency
When enabled as inputs, these parameters can be dynamically set by previous nodes in your workflow. If not enabled, the values set in the node configuration will be used.
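The override order described above (dynamic input beats node configuration, which beats the default) can be sketched as a simple parameter merge. The `DEFAULTS` values and the `resolve_params` helper are illustrative assumptions, not the platform's real internals:

```python
# Hypothetical defaults; the actual node defaults may differ.
DEFAULTS = {
    "model_preference": "Claude 3.5 Sonnet",
    "Cache Response": False,
    "Temperature": 0.3,
}

def resolve_params(node_config: dict, dynamic_inputs: dict) -> dict:
    """Merge parameters: dynamic inputs from upstream nodes override the
    node configuration, which in turn overrides the defaults."""
    resolved = {**DEFAULTS, **node_config, **dynamic_inputs}
    if not 0 <= resolved["Temperature"] <= 1:
        raise ValueError("Temperature must be between 0 and 1")
    return resolved
```

For example, a node configured with Temperature 0.5 but receiving `model_preference` from an upstream node would run with both: the configured temperature and the dynamically supplied model.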
Node Output
- Summary: Condensed version of input text
Available AI Models
- Claude 3.5 Sonnet
- Claude 3 Haiku
- OpenAI o1
- OpenAI o1 mini
- GPT-4o
- GPT-4o Mini
- DeepSeek V3
- DeepSeek R1
- Gemini 1.5 Pro/Flash
- And more
Common Use Cases
- Document Summarization: Condense lengthy reports or articles into key takeaways
- News Digests: Produce brief overviews of multiple news stories
- Meeting Notes: Distill transcripts into decisions and action items
Loop Mode Pattern
Important Considerations
- Expert models (OpenAI o1) cost 30 credits, advanced models (GPT-4o & Claude 3.5) cost 20 credits, and standard models cost 2 credits per run
- You can drop the credit cost to 1 per run by providing your own API key on the Credentials page
- Lower temperature (0-0.3) for factual summaries
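The credit rules above amount to a small lookup table: cost depends on the model tier unless you bring your own API key, in which case every run costs 1 credit. A sketch of that pricing logic (the tier names and helper are illustrative, not a real API):

```python
# Credit cost per run by model tier, per the considerations above.
CREDIT_COST = {
    "expert": 30,    # e.g. OpenAI o1
    "advanced": 20,  # e.g. GPT-4o, Claude 3.5 Sonnet
    "standard": 2,
}

def cost_per_run(tier: str, own_api_key: bool = False) -> int:
    """Credits charged for one Summarizer run; 1 when using your own API key."""
    return 1 if own_api_key else CREDIT_COST[tier]
```

So a workflow summarizing 100 documents with an advanced model would cost 2,000 credits on platform keys, but only 100 credits with your own key.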
In summary, the Summarizer node is your tool for converting long texts into concise, readable summaries while preserving essential information.