This document explains the Summarizer node, which condenses large texts into concise summaries using AI.

Node Inputs

Required Fields

  • Text: Content to summarize (documents, articles, etc.)

Optional Fields in More Options

  • Choose AI Model: Select your preferred AI model
  • Temperature: Controls summary style (0-1)
    • 0: More focused, factual
    • 1: More creative, varied
  • Cache Response: Save responses for reuse
  • Prompt Template: Customize how the summary is requested
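
The node's internal implementation is not exposed, but the minimal Python sketch below shows how these configuration fields could map onto a summarization request. The class, constant, and function names are illustrative assumptions, not the platform's API.

```python
# Conceptual sketch only; names are illustrative, not the platform's actual API.
from dataclasses import dataclass

DEFAULT_PROMPT_TEMPLATE = (
    "Summarize the following text into a concise summary that preserves "
    "the key points:\n\n{text}"
)

@dataclass
class SummarizerConfig:
    model: str = "GPT-4o Mini"                      # Choose AI Model
    temperature: float = 0.2                        # 0 = focused/factual, 1 = creative/varied
    cache_response: bool = True                     # Cache Response
    prompt_template: str = DEFAULT_PROMPT_TEMPLATE  # Prompt Template

def build_prompt(config: SummarizerConfig, text: str) -> str:
    """Fill the prompt template with the input text before it is sent to the model."""
    return config.prompt_template.format(text=text)

prompt = build_prompt(SummarizerConfig(), "Long article text goes here...")
```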

Show As Input

The node allows you to configure certain parameters as dynamic inputs. You can enable these in the “Configure Inputs” section:

  • model_preference: String

    • Name of the AI model to use
    • Accepted values: “Claude 3.7 Sonnet”, “Claude 3.5 Haiku”, “GPT-4o”, “GPT-4o Mini”, etc.
  • Cache Response: Boolean

    • true/false to enable/disable response caching
    • Helps reduce API calls for identical inputs
  • Temperature: Number

    • Value between 0 and 1
    • Controls how consistent and focused the generated summaries are

When enabled as inputs, these parameters can be dynamically set by previous nodes in your workflow. If not enabled, the values set in the node configuration will be used.
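
As a rough illustration of this fallback behavior, the sketch below merges upstream values with the node's static configuration, preferring the upstream values when they are present. The helper name and parameter keys are hypothetical, not the platform's API.

```python
# Illustrative sketch: parameters exposed via "Show As Input" come from upstream
# node outputs when provided; otherwise the node's static configuration is used.
def resolve_parameters(static_config: dict, upstream_inputs: dict) -> dict:
    resolved = dict(static_config)
    for key in ("model_preference", "cache_response", "temperature"):
        if upstream_inputs.get(key) is not None:
            resolved[key] = upstream_inputs[key]
    return resolved

static_config = {"model_preference": "GPT-4o Mini", "cache_response": True, "temperature": 0.2}
upstream_inputs = {"model_preference": "Claude 3.7 Sonnet"}  # set by a previous node
print(resolve_parameters(static_config, upstream_inputs))
# {'model_preference': 'Claude 3.7 Sonnet', 'cache_response': True, 'temperature': 0.2}
```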

Node Output

  • Summary: Condensed version of input text

Available AI Models

  • Claude 3.7 Sonnet
  • Claude 3.7 Sonnet Thinking (extended reasoning capabilities)
  • Claude 3.5 Haiku
  • OpenAI o1
  • OpenAI o3 mini
  • GPT-4o
  • GPT-4o Mini
  • DeepSeek V3
  • DeepSeek R1
  • Perplexity Sonar Reasoning
  • Perplexity Sonar Reasoning Pro
  • Gemini 2.0 Flash
  • Grok 2
  • Azure OpenAI
  • And more

Note: Auto-Select uses a third-party model-routing service and automatically chooses a model based on cost, performance, and availability. It is not ideal when consistent model behavior across runs is required.

AI Model Selection Guide

When choosing an AI model for your task, consider these key factors:

  • Standard Models
    • Ideal use cases: General content creation, basic Q&A, simple analysis
    • Considerations: Lower cost, faster response time, good for most everyday tasks
  • Advanced Models
    • Ideal use cases: Complex analysis, nuanced content, specialized knowledge domains
    • Considerations: Better quality but higher cost; good balance of performance and efficiency
  • Expert & Thinking-Enabled Models
    • Ideal use cases: Complex reasoning, step-by-step problem-solving, coding, detailed analysis, math problems, technical content
    • Considerations: Highest quality but most expensive; best for complex, long-form tasks; longer response time

Additional selection factors:

  • Task complexity and required accuracy
  • Response time requirements
  • Cost considerations
  • Consistency needs across runs
  • Specialized knowledge requirements
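
The guide above can be read as a simple decision rule. The sketch below is one illustrative way to encode it; the tier names, thresholds, and example models are assumptions rather than platform behavior.

```python
# Hypothetical helper illustrating the selection guide; not platform logic.
def pick_model_tier(task_complexity: str, needs_reasoning: bool, cost_sensitive: bool) -> str:
    if needs_reasoning or task_complexity == "high":
        return "expert"        # e.g. OpenAI o1, Claude 3.7 Sonnet Thinking
    if task_complexity == "medium" and not cost_sensitive:
        return "advanced"      # e.g. GPT-4o, Claude 3.7 Sonnet
    return "standard"          # e.g. GPT-4o Mini, Claude 3.5 Haiku

print(pick_model_tier("low", needs_reasoning=False, cost_sensitive=True))  # standard
```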

Common Use Cases

  1. Document Summarization:
     Input: Long research papers
     Output: Key findings and conclusions
  2. News Digests:
     Input: Multiple news articles
     Output: Brief overviews
  3. Meeting Notes (see the example prompt template after this list):
     Input: Meeting transcripts
     Output: Key points and action items
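
For the Meeting Notes case, a custom Prompt Template can steer the summary toward decisions and action items. The template below is only an illustrative example; the exact placeholder syntax (`{text}`) is an assumption.

```python
# Illustrative custom prompt template for the "Meeting Notes" use case;
# the {text} placeholder syntax is an assumption, not a documented format.
MEETING_NOTES_TEMPLATE = (
    "Summarize the following meeting transcript.\n"
    "Return:\n"
    "1. Key discussion points\n"
    "2. Decisions made\n"
    "3. Action items with owners\n\n"
    "Transcript:\n{text}"
)

print(MEETING_NOTES_TEMPLATE.format(text="Transcript text goes here..."))
```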

Loop Mode Pattern

Input: List of documents
Process: Summarize each independently
Result: List of summaries
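
Conceptually, Loop Mode behaves like a simple map over the input list, as in the sketch below; `summarize` is a hypothetical stand-in for one run of the Summarizer node.

```python
# Conceptual sketch of Loop Mode: each document is summarized independently,
# producing a list of summaries in the same order as the inputs.
def summarize(text: str) -> str:
    return text[:100] + "..."  # placeholder for a real model call

documents = ["First long report...", "Second long report...", "Third long report..."]
summaries = [summarize(doc) for doc in documents]  # one summary per input document
```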

Important Considerations

  1. Expert models (e.g., OpenAI o1) cost 30 credits per run, advanced models (e.g., GPT-4o and Claude 3.7 Sonnet) cost 20 credits per run, and standard models cost 2 credits per run (a worked cost example follows this list)
  2. You can reduce the credit cost to 1 per run by providing your own API key on the Credentials page
  3. Use a lower temperature (0-0.3) for factual summaries
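
Based on the credit costs listed above, a quick back-of-the-envelope calculation shows how the model tier affects the total cost of a batch run; the tier labels in this sketch are informal.

```python
# Worked example of the per-run credit costs listed above.
CREDITS_PER_RUN = {"expert": 30, "advanced": 20, "standard": 2, "own_api_key": 1}

runs = 50  # e.g. summarizing 50 documents in Loop Mode
for tier, cost in CREDITS_PER_RUN.items():
    print(f"{tier}: {runs * cost} credits")
# expert: 1500, advanced: 1000, standard: 100, own_api_key: 50
```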

In summary, the Summarizer node is your tool for converting long texts into concise, readable summaries while preserving essential information.