This document explains the Categorizer node, which uses AI to classify text into custom categories.

Node Inputs

Required Fields

  • Input: Text to categorize
  • Categories: Define your classification groups:
    • Category Name: Label for the category
    • Category Description: Explain what belongs in this category
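
For illustration only, here is a minimal sketch of how an input and its categories might be laid out; the key names are assumptions for readability, not the node's actual schema:

```python
# Illustrative sketch of a minimal Categorizer configuration; the key names
# ("input", "categories", "name", "description") are assumptions, not the
# node's actual schema.
config = {
    "input": "The login page throws a 500 error after I submit my password.",
    "categories": [
        {"name": "Bug Report",      "description": "Technical issues or errors"},
        {"name": "Feature Request", "description": "New functionality suggestions"},
        {"name": "Account Issue",   "description": "Login or access problems"},
    ],
}
```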

Optional Fields

  • Include Justification: Get AI’s reasoning for selections
  • Additional Context: Extra guidance for categorization
  • Temperature: Controls AI decision-making (0-1)
    • 0: More focused, consistent
    • 1: More creative, varied
  • Cache Response: Save responses for reuse

Show As Input

The node allows you to configure certain parameters as dynamic inputs. You can enable these in the “Configure Inputs” section:
  • include_justification: Boolean
    • true/false to include explanation for category assignment
  • Additional Context: String
    • Extra information to guide the categorization process
    • Example: “These items are different types of software bugs”
  • model_preference: String
    • Name of the AI model to use
    • Accepted values: “Claude 4.5 Sonnet”, “Claude 4.5 Haiku”, “GPT-5”, “GPT-4.1”, etc.
  • Cache Response: Boolean
    • true/false to enable/disable response caching
    • Helps reduce API calls for identical inputs
  • Temperature: Number
    • Value between 0 and 1
    • Controls categorization consistency
When enabled as inputs, these parameters can be dynamically set by previous nodes in your workflow. If not enabled, the values set in the node configuration will be used.
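As a rough sketch (the keys mirror the parameters listed above, but the exact wire format your workflow uses is an assumption), a previous node might pass these values like so:

```python
# Hypothetical sketch of values a previous node might feed into the
# Categorizer's dynamic inputs. Keys mirror the parameters listed above;
# the exact format used by the platform may differ.
dynamic_inputs = {
    "include_justification": True,            # Boolean: return the AI's reasoning
    "additional_context": "These items are different types of software bugs",
    "model_preference": "Claude 4.5 Sonnet",  # String: model name
    "cache_response": True,                   # Boolean: reuse results for identical inputs
    "temperature": 0.2,                       # Number in [0, 1]: lower = more consistent
}
```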

AI Model Fallback

Under Show More Options, configure automatic fallback when your selected AI model is unavailable. Fallback is enabled by default. When an error occurs (rate limits, provider outages, timeouts), the system retries based on severity, then falls back to the next model. Fallback models are always from different providers for true redundancy.
| Error Type | Retries Before Fallback |
| --- | --- |
| Rate Limit | 2 |
| Provider 5xx | 1 |
| Network Error | 0 (immediate) |
| Timeout | 1 |
Default (Auto): System auto-selects fallbacks based on your primary model:
  • Expert → Claude Opus 4.5 → Gemini 3 Pro → GPT-5.2
  • Fastest → Gemini 3 Flash → Claude Haiku 4.5 → GPT-4.1
  • Recommended → Claude Sonnet 4.5 → Gemini 3 Flash → GPT-5.2
Override: Enable to manually select up to 2 fallback models with drag-and-drop priority.
Disabling fallback means your node will fail if the primary model is unavailable.
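The retry-then-fallback flow can be pictured with a small sketch. The function, its signature, and the error `kind` attribute below are illustrative assumptions, not the platform's API; only the retry counts come from the table above.

```python
import time

# Retries per error type before falling back (counts from the table above).
RETRIES_BEFORE_FALLBACK = {
    "rate_limit": 2,
    "provider_5xx": 1,
    "network_error": 0,  # fall back immediately
    "timeout": 1,
}

def categorize_with_fallback(call_model, models, text):
    """Try each model in priority order, retrying per error type before moving on.

    `call_model(model, text)` is a hypothetical callable standing in for one
    model invocation; it returns a category or raises an error whose `kind`
    attribute matches a key in RETRIES_BEFORE_FALLBACK.
    """
    last_error = None
    for model in models:  # primary model first, then fallback models
        retries = 0
        while True:
            try:
                return call_model(model, text)
            except Exception as err:
                kind = getattr(err, "kind", "network_error")
                if retries >= RETRIES_BEFORE_FALLBACK.get(kind, 0):
                    last_error = err
                    break  # give up on this model, try the next one
                retries += 1
                time.sleep(2 ** retries)  # simple backoff between retries
    raise last_error  # every model in the list was exhausted
```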

Node Output

  • Selected Category: Chosen category name
  • Justification: AI’s reasoning (if enabled)

Node Functionality

The Categorizer node:
  • Analyzes input text
  • Matches to best category
  • Provides reasoning (optional)
  • Handles batch processing
  • Supports custom categories

Available AI Models

| Tier | Models |
| --- | --- |
| Expert | GPT-5.2, GPT-5.1, GPT-5, OpenAI o3, Claude 4.5/4.1/4 Opus, Claude 3.7 Sonnet Thinking, Gemini 3 Pro, Grok 4 |
| Advanced | GPT-4.1, OpenAI o4-mini, Claude 4.5/4/3.7 Sonnet, Gemini 2.5 Pro, Grok 3, Perplexity Sonar Pro, LLaMA 3 405B |
| Standard | GPT-4.1 Mini/Nano, GPT-5 Mini/Nano, Claude 4.5 Haiku, Gemini 3/2.5 Flash, Grok 3 Mini, DeepSeek V3/R1, Mixtral 8x7B |
| Special | Auto-Select, Azure OpenAI (requires credentials) |
Auto-Select uses third-party routing to choose models based on cost and performance, so it is not ideal when consistent behavior is required.

AI Model Selection Guide

When choosing an AI model for your task, consider these key factors:
| Model Type | Ideal Use Cases | Considerations |
| --- | --- | --- |
| Standard Models | General content creation, basic Q&A, simple analysis | Lower cost, faster response time, good for most everyday tasks |
| Advanced Models | Complex analysis, nuanced content, specialized knowledge domains | Better quality but higher cost, good balance of performance and efficiency |
| Expert & Thinking-Enabled Models | Complex reasoning, step-by-step problem-solving, coding, detailed analysis, math problems, technical content | Highest quality but most expensive, best for complex and long-form tasks, longer response time |
Additional selection factors:
  • Task complexity and required accuracy
  • Response time requirements
  • Cost considerations
  • Consistency needs across runs
  • Specialized knowledge requirements
For more detailed information on AI models with advanced reasoning capabilities, refer to the dedicated documentation for those models.

Example Use Cases

  1. Sentiment Analysis:
Categories:
- Positive: "Expresses satisfaction or approval"
- Negative: "Shows dissatisfaction or criticism"
- Neutral: "States facts without emotion"
  2. Support Tickets:
Categories:
- Bug Report: "Technical issues or errors"
- Feature Request: "New functionality suggestions"
- Account Issue: "Login or access problems"
  3. Content Classification:
Categories:
- News: "Current events and reporting"
- Opinion: "Personal views and analysis"
- Tutorial: "How-to guides and instructions"
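
As a minimal sketch of the first use case, the `categorize` helper below is a hypothetical stand-in for the node itself, not a real API:

```python
# Hypothetical stand-in for a single Categorizer run, for illustration only.
def categorize(text, categories, include_justification=False):
    ...  # the node sends the text plus category descriptions to the selected AI model

sentiment_categories = [
    {"name": "Positive", "description": "Expresses satisfaction or approval"},
    {"name": "Negative", "description": "Shows dissatisfaction or criticism"},
    {"name": "Neutral",  "description": "States facts without emotion"},
]

# For a review like "The checkout flow was quick and painless", the node
# would be expected to return something along the lines of:
#   Selected Category: Positive
#   Justification (if enabled): "The text expresses satisfaction with the checkout flow."
```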

Loop Mode

  • Input: List of customer feedback
  • Process: Categorize each item
  • Output: Category per item + justifications
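
A rough sketch of what this looks like conceptually; the helper below is a hypothetical stand-in for one node run, not a real function in the platform:

```python
# Conceptual sketch of Loop Mode; categorize_one is a hypothetical stand-in
# for a single Categorizer run.
def categorize_one(feedback_item):
    ...  # one run: returns the selected category (and justification, if enabled)

feedback = [
    "The app crashes every time I open the settings page.",
    "Please add a dark mode option.",
    "I was charged twice this month.",
]

# Loop Mode runs the node once per list item and collects one result per item.
results = [categorize_one(item) for item in feedback]
```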

Important Considerations

  1. Expert models (e.g., OpenAI o3) cost 30 credits per run, advanced models (e.g., GPT-4.1, Claude 3.7 & Grok 4) cost 20 credits per run, and standard models cost 2 credits per run
  2. You can reduce the credit cost to 1 per run by providing your own API key on the Credentials page
  3. Write clear category descriptions for accurate outputs
  4. Enable justification for important decisions
  5. Use additional context for complex rules
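For example, based on the per-run costs above, categorizing 100 feedback items in Loop Mode would cost roughly 100 × 2 = 200 credits with a standard model or 100 × 30 = 3,000 credits with an Expert model, and about 100 × 1 = 100 credits if you supply your own API key.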

Additional Information

In summary, the Categorizer node helps organize text into meaningful groups using AI, with optional explanations for each decision.