Node Inputs
Required Fields
- Filter By: Main content to filter
- Value: The output you want to pass if the condition is met
- Condition: Natural language comparison rule, e.g. “Is the provided text in Spanish”
Optional Fields
- Output Blank Value: Return blanks for non-matches
- Temperature: Controls decision consistency (0-1)
- 0: More focused, consistent
- 1: More creative, varied
- Cache Response: Save responses for reuse
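To make the pairing of these fields concrete, here is a hypothetical set of inputs (the data and the expected outcome are illustrative, not produced by the node):

```python
# Hypothetical inputs for the AI Filter node. "Filter By" holds the
# content the condition is evaluated against; "Value" holds what is
# passed through on a match. The lists are paired by position.
filter_by = ["Hola mundo", "Hello world", "Buenos días"]
value = ["doc-1", "doc-2", "doc-3"]
condition = "Is the provided text in Spanish"

# The lists must match in length, since items are compared pairwise.
assert len(filter_by) == len(value)
```

With this condition, a model would be expected to pass through “doc-1” and “doc-3”.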
Show As Input
The node allows you to configure certain parameters as dynamic inputs. You can enable these in the “Configure Inputs” section:
condition: String
- Natural language comparison rule
- Example: “Is the provided text in Spanish”
- Example: “Does the text contain pricing information”
output_blank_value: Boolean
- true/false to control what happens with non-matches
- When true, outputs blank for non-matching items
- When false, skips non-matching items entirely
model_preference: String
- Name of the AI model to use
- Accepted values: “Claude 3.7 Sonnet”, “Claude 3.5 Haiku”, “GPT-4.1”, “GPT-4.1 Mini”, etc.
cache_response: Boolean
- true/false to enable/disable response caching
- Helps reduce API calls for identical inputs
temperature: Number
- Value between 0 and 1
- Controls decision consistency
- Lower values (closer to 0) provide more consistent filtering results
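As a sketch, the dynamic inputs above could be supplied as a simple key/value mapping. The exact wire format the platform expects is an assumption here; only the parameter names come from this page:

```python
# Hypothetical payload for the parameters exposed under "Configure Inputs".
# Keys mirror the parameter names documented above; the exact format is
# an assumption in this sketch, not the platform's actual API.
dynamic_inputs = {
    "condition": "Does the text contain pricing information",
    "output_blank_value": False,   # skip non-matching items entirely
    "model_preference": "Claude 3.5 Haiku",
    "cache_response": True,        # reuse responses for identical inputs
    "temperature": 0,              # most consistent filtering decisions
}
```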
Node Output
- Filtered Output: Values that meet your condition
Node Functionality
The AI Filter node:
- Pairs each Filter By item with its corresponding Value
- Evaluates each Filter By item against your natural language condition
- Passes through the paired Value when the condition is met
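This pairwise behavior can be sketched in Python. Here `judge` is a hypothetical stand-in for the AI model’s yes/no decision; the real node calls the selected model instead:

```python
def ai_filter(filter_by, values, condition, judge, output_blank_value=False):
    """Conceptual sketch of the AI Filter node's pairwise evaluation."""
    if len(filter_by) != len(values):
        raise ValueError("Value and Filter By lists must match in length")
    filtered = []
    for item, value in zip(filter_by, values):
        if judge(item, condition):   # the AI model's yes/no per pair
            filtered.append(value)
        elif output_blank_value:
            filtered.append("")      # Output Blank Value enabled
        # otherwise the non-matching item is skipped entirely
    return filtered

# Toy judge that pretends the model recognizes Spanish greetings:
judge = lambda text, condition: text.startswith(("Hola", "Buenos"))
texts = ["Hola mundo", "Hello world"]
ids = ["t1", "t2"]
print(ai_filter(texts, ids, "Is the provided text in Spanish", judge))
# → ['t1']
print(ai_filter(texts, ids, "Is the provided text in Spanish", judge,
                output_blank_value=True))
# → ['t1', '']
```

Note how `output_blank_value` only changes what happens to non-matches: blank placeholder versus dropped entirely.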
Available AI Models
- Claude 3.7 Sonnet
- Claude 3.5 Haiku
- OpenAI o3
- OpenAI o4-mini
- GPT-4.1
- GPT-4.1 Mini
- DeepSeek V3
- DeepSeek R1
- Gemini 2.5 Pro
- Gemini 2.5 Flash
- Grok 3
- Grok 3 Mini
- Azure OpenAI
- And more
Note: Auto-Select uses a third-party model routing service and automatically chooses the appropriate model for cost, performance, and availability. Not ideal if consistent model behavior is needed.
AI Model Selection Guide
When choosing an AI model for your task, consider these key factors:
- Task complexity and required accuracy
- Response time requirements
- Cost considerations
- Consistency needs across runs
- Specialized knowledge requirements

| Model Type | Ideal Use Cases | Considerations |
|---|---|---|
| Standard Models | General content creation, basic Q&A, simple analysis | Lower cost, faster response time, good for most everyday tasks |
| Advanced Models | Complex analysis, nuanced content, specialized knowledge domains | Better quality but higher cost, good balance of performance and efficiency |
| Expert & Thinking-Enabled Models | Complex reasoning, step-by-step problem-solving, coding, detailed analysis, math problems, technical content | Highest quality but most expensive, best for complex and long-form tasks, longer response time |
For more details, see:
- Anthropic Models Overview
- Anthropic Extended Thinking Documentation
- OpenAI Reasoning Guide
- OpenAI o3 Models
Important Considerations
- Expert models (OpenAI o3) cost 30 credits per run, advanced models (GPT-4.1, Claude 3.7, and Grok 4) cost 20, and standard models cost 2
- You can reduce the credit cost to 1 by providing your own API key on the Credentials page
- The Value and Filter By lists must match in length
- Write clear comparison conditions for accurate outputs
- This node relies heavily on AI model performance, which may vary depending on the complexity of your filtering conditions. For more reliable and consistent filtering:
- Use the Filter node for straightforward comparisons and exact matching
- Create a custom node for complex filtering logic that needs to be precise and deterministic
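For comparison, a deterministic exact-match filter (the kind of logic the Filter node or a custom node would give you) needs no AI call at all. A minimal sketch, where `exact_filter` is a hypothetical helper and not part of the platform:

```python
# Deterministic filtering: exact matching is reproducible on every run,
# unlike an AI judgment, and involves no model call. `exact_filter` is a
# hypothetical helper for illustration only.
def exact_filter(filter_by, values, target):
    return [v for f, v in zip(filter_by, values) if f == target]

print(exact_filter(["es", "en", "es"], ["a", "b", "c"], "es"))
# → ['a', 'c']
```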