The Ask AI node lets you interact with AI models to process text and generate responses. Connect it to other nodes to create powerful automated workflows that leverage the latest AI capabilities.

Quick Start

1. Add the Ask AI node to your workflow: drag the Ask AI node from the node library onto your canvas.
2. Write your prompt: enter clear, detailed instructions in the prompt field to guide the AI.
3. Choose your AI model: select the model that best fits your task's complexity and budget.
4. Connect and run: connect inputs from other nodes by dragging output badges into your prompt.

Node Configuration

Required Fields

Prompt
The main instruction or question for the AI. Your prompt should be clear and detailed to get the best possible response.

Example prompt formats:

Analyze this website content and provide a one-page summary:
[drag Website Scraper badge here]

Write a professional email response to:
[drag Customer Query badge here]

More Options

Model Preference
Select from over 20 AI models, including Claude, GPT, Gemini, and specialized reasoning models. See the AI Model Selection Guide below for detailed recommendations.
Temperature
Controls response creativity and consistency.
  • 0: More focused and consistent responses
  • 1 (default): More creative and varied outputs
Use lower temperatures for factual tasks, higher for creative content.
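Conceptually, temperature rescales the model's next-token probabilities before sampling. A minimal Python sketch of that effect (illustrative only; this is not Gumloop's or any provider's internal implementation):

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.

    A tiny temperature approaches greedy decoding (the most likely
    token dominates); temperature 1.0 leaves the distribution as-is.
    """
    scaled = [l / max(temperature, 1e-6) for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                   # hypothetical next-token scores
focused = apply_temperature(logits, 0.1)   # near-deterministic output
creative = apply_temperature(logits, 1.0)  # default: more varied output
```

At temperature 0.1 the top token captures nearly all of the probability mass, which is why low settings produce consistent, repeatable answers.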
Maximum Tokens
Limits the total response length by setting the upper bound on how long the AI's response can be.
For Claude 3.7 Sonnet Thinking, this must be greater than your Thinking Tokens setting.
Response Caching
Saves responses for reuse when inputs remain constant. Caching works only when ALL of the following are identical:
  • Prompt text (including any inserted input badges)
  • Model selection
  • Temperature setting
  • Maximum tokens
  • Thinking tokens (if applicable)
Perfect for testing workflows or handling repeated queries.
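The all-fields rule above behaves like a cache key derived from every setting at once: change any one field and the key changes. A small sketch of that idea (field names and hashing are illustrative, not Gumloop's actual cache implementation):

```python
import hashlib
import json

def cache_key(prompt, model, temperature, max_tokens, thinking_tokens=None):
    """Derive a stable key from every field the caching rule lists.

    Any difference in any field produces a different key, so the
    cached response is reused only when all inputs match exactly.
    """
    payload = json.dumps(
        {"prompt": prompt, "model": model, "temperature": temperature,
         "max_tokens": max_tokens, "thinking_tokens": thinking_tokens},
        sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = cache_key("Summarize this article", "Claude 3.7 Sonnet", 0, 2000)
k2 = cache_key("Summarize this article", "Claude 3.7 Sonnet", 0, 2000)
k3 = cache_key("Summarize this article", "Claude 3.7 Sonnet", 1, 2000)
# k1 == k2 (cache hit); k3 differs because only temperature changed
```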
Thinking Tokens
Sets a budget for the model's internal reasoning process before generating the final response. Requirements:
  • Minimum: 1,024 tokens
  • Must be less than Maximum Tokens
  • Recommended: 4,000-16,000 for complex tasks
Larger budgets improve reasoning quality but increase cost and response time.
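The two constraints above can be captured in a simple check (an illustrative helper, not part of Gumloop's API):

```python
def validate_thinking_budget(thinking_tokens, max_tokens):
    """Enforce the stated rules: at least 1,024 thinking tokens,
    and strictly fewer than the Maximum Tokens setting."""
    if thinking_tokens < 1024:
        raise ValueError("Thinking Tokens must be at least 1,024")
    if thinking_tokens >= max_tokens:
        raise ValueError("Thinking Tokens must be less than Maximum Tokens")
    return True

validate_thinking_budget(8000, 16000)  # OK: within the recommended range
```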
MCP Server
Connect to a remote Model Context Protocol (MCP) server to extend the AI's capabilities with custom tools and data sources.
Learn how to set up and use MCP servers with the Ask AI node in the Ask AI MCP Support documentation.

Dynamic Inputs (Show As Input)

You can configure certain parameters as dynamic inputs that can be set by previous nodes in your workflow:
| Parameter | Type | Example Values |
| --- | --- | --- |
| Prompt | String | "Summarize this article" |
| Model Preference | String | "Claude 3.7 Sonnet", "GPT-4.1", "Gemini 2.5 Pro" |
| Temperature | Number | 0 to 1 |
| Maximum Tokens | Number | Any positive integer (e.g., 2000) |
| Thinking Tokens | Number | Minimum 1,024 (Claude 3.7 Sonnet Thinking only) |
When enabled as inputs, these parameters can be dynamically set by previous nodes. If not enabled, the values set in the node configuration will be used.
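The override rule can be sketched as a simple merge: a parameter supplied by a previous node wins over the node's own configuration, and anything not supplied falls back to the configured value (illustrative only; this is not Gumloop's internal data model):

```python
def resolve_params(node_config, dynamic_inputs):
    """Merge dynamic inputs over the node configuration.

    A value set by a previous node overrides the configured value;
    parameters the upstream node did not supply keep their defaults.
    """
    resolved = dict(node_config)
    for key, value in dynamic_inputs.items():
        if value is not None:
            resolved[key] = value
    return resolved

config = {"model": "Claude 3.7 Sonnet", "temperature": 1, "max_tokens": 2000}
upstream = {"temperature": 0}  # set dynamically by a previous node
params = resolve_params(config, upstream)
# params["temperature"] is 0; model and max_tokens keep configured values
```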

Using Connected Node Data

Gumloop's interface makes it simple to incorporate data from other nodes:
1. Connect your nodes: drag a connection line between the source node and your Ask AI node.
2. Access outputs in the side menu: available outputs from connected nodes appear automatically in the side menu.
3. Drag outputs into your prompt: drag the output badge from the side menu and drop it into your prompt field.
4. Format around dynamic values: add text before and after the output badges to create well-structured prompts.

Claude 3.7 Sonnet: Standard vs. Thinking Mode

Claude 3.7 Sonnet is available in two modes optimized for different use cases:

Standard Mode

Best for: Most everyday tasks
  • Direct responses without extended reasoning
  • Quick response time
  • Excellent for creative content and analysis
  • More efficient for straightforward tasks
  • Lower cost per request

Thinking Mode

Best for: Complex problem-solving
  • Additional internal reasoning before responding
  • Higher quality answers for complex problems
  • Ideal for multi-step logic and calculations
  • Perfect for detailed code and debugging
  • Only final response shown (thinking process is internal)
Learn more about Claude 3.7 Sonnet in Anthropic’s announcement.

Understanding Token Budgets

Thinking Tokens
For Claude 3.7 Sonnet Thinking only. This is the budget allocated for the model's internal reasoning process:
  • Must be less than Maximum Tokens
  • Minimum: 1,024 tokens
  • Recommended: 4,000-16,000 for complex tasks
  • Larger budgets improve reasoning but increase cost and response time
  • The model decides how much to use based on task complexity

Available AI Models

Gumloop supports 20+ leading AI models:
  • Claude 3.7 Sonnet
  • Claude 3.7 Sonnet Thinking
  • GPT 5
  • GPT 5-mini
  • GPT 5 nano
  • OpenAI o3
  • OpenAI o3 Deep Research
  • OpenAI o4-mini
  • OpenAI o4-mini Deep Research
  • GPT-4.1
  • GPT-4.1 Mini
  • DeepSeek V3
  • DeepSeek R1
  • Perplexity Sonar Reasoning
  • Perplexity Sonar Reasoning Pro
  • Gemini 2.5 Pro
  • Gemini 2.5 Flash
  • Grok 4
  • Grok 4 Mini
  • Azure OpenAI
  • And more…

Deep Research Models

OpenAI o3 Deep Research

Premium deep research capabilities for the most demanding analytical tasks.

OpenAI o4-mini Deep Research

Cost-effective deep research for thorough analysis on a budget.
Deep Research models perform comprehensive, multi-step reasoning and investigation. They’re specifically designed for queries that require thorough research, fact-checking, and synthesizing information from multiple angles.

AI Model Selection Guide

Choose the right model based on your task requirements:
| Model Type | Ideal Use Cases | Key Considerations |
| --- | --- | --- |
| Standard Models | General content creation, basic Q&A, simple analysis | Lower cost; faster response time; good for everyday tasks |
| Advanced Models | Complex analysis, nuanced content, specialized knowledge | Better quality; higher cost; good balance of performance and efficiency |
| Expert & Thinking Models | Complex reasoning, coding, detailed analysis, math problems | Highest quality; most expensive; best for complex tasks; longer response time |
| Deep Research Models | Comprehensive investigation, fact-checking, multi-source analysis | Thorough research capability; multi-step investigation; best for research-intensive tasks; longest response time |
About Auto-Select: Uses a third-party model routing service that automatically chooses models based on cost, performance, and availability. Not ideal when consistent model behavior is required.

When to Use Deep Research Models

Deep Research models (OpenAI o3 Deep Research and o4-mini Deep Research) are designed for tasks that require comprehensive investigation and analysis:
Perfect use cases for Deep Research:
  • Market Research: Analyzing industry trends, competitor landscapes, and market opportunities
  • Due Diligence: Investigating companies, technologies, or business proposals
  • Fact-Checking: Verifying claims across multiple sources and perspectives
  • Literature Review: Synthesizing information from multiple documents or sources
  • Competitive Analysis: Deep comparison of products, services, or strategies
  • Complex Report Generation: Creating comprehensive reports that require thorough investigation
  • Multi-Perspective Analysis: Examining topics from different angles and viewpoints

Deep Research Model Comparison

OpenAI o3 Deep Research

Best for: Most demanding research tasks
  • Highest quality deep research
  • Most comprehensive analysis
  • Best for critical business decisions
  • Longer processing time

OpenAI o4-mini Deep Research

Best for: Faster, cost-effective research
  • Good quality deep research
  • Faster than O3 Deep Research
  • Suitable for routine research tasks

Additional Selection Factors

Consider these factors when choosing a model:
  • Task complexity and required accuracy
  • Response time requirements
  • Cost considerations
  • Consistency needs across runs
  • Specialized knowledge requirements
  • Need for comprehensive investigation vs. quick answers

Node Output

Response: The AI’s generated answer or output based on your prompt and configured parameters.

Common Use Cases

Content Generation
Prompt: "Write a blog post about [drag Topic input badge here]"
Perfect for generating articles, social media posts, marketing copy, and other written content at scale.

Data Analysis
Prompt: "Analyze these sales figures and provide key insights:
[drag Sales Data input badge here]"
Extract insights, identify trends, and generate summaries from structured or unstructured data.

Customer Support
Prompt: "Answer this customer question professionally according to our company policies:
Customer Question: [drag Customer Query input badge here]"
Automate responses to common questions while maintaining brand voice and policy compliance.

Deep Research
Prompt: "Research the competitive landscape for SaaS project management tools, including:
- Top 5 competitors
- Their pricing models
- Key differentiating features
- Market positioning
[drag Market Segment input badge here]"
Model: OpenAI o3 Deep Research or o4-mini Deep Research
Use Deep Research models when you need comprehensive investigation, fact-checking across multiple angles, or thorough analysis of complex topics.

Loop Mode Pattern

When processing multiple items in Loop Mode, the Ask AI node analyzes each item individually:
Input: List of articles
Prompt: "Analyze and find key patterns in this article: [drag Current Article output badge here]"
Result: Analysis generated for each article in the list
In Loop Mode, your workflow runs once for each item in the input list, allowing batch processing of multiple documents, queries, or data points.
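The per-item behavior above can be pictured as a simple loop, with each list item substituted into the prompt where the badge sits (a conceptual sketch; `ask_ai` here is a hypothetical stand-in for one real model run):

```python
def ask_ai(prompt):
    """Hypothetical stand-in for a single Ask AI run."""
    return f"Analysis of: {prompt}"

articles = ["Article A text...", "Article B text...", "Article C text..."]

# Loop Mode conceptually runs the node once per item in the input list,
# substituting each item into the prompt where the badge appears.
results = [ask_ai(f"Analyze and find key patterns in this article: {a}")
           for a in articles]
# One analysis per input article, in input order
```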

Credit Costs

Understanding credit usage helps you optimize workflow costs:
| Model Category | Credits per Run | With API Key |
| --- | --- | --- |
| Expert Models (OpenAI o3, Claude 3.7 Thinking) | 30 credits | 1 credit |
| Advanced Models (GPT-4.1, Claude 3.7) | 20 credits | 1 credit |
| Standard Models | 2 credits | 1 credit |
Configure your own API keys in the credentials page to reduce credit costs to 1 per run for that specific provider’s models.
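As a quick estimate of how much an API key saves, the table above reduces to a small calculation (rates copied from the table; the category names and helper are illustrative, not a billing API):

```python
# Credits per run by model category, per the table above
CREDITS = {"expert": 30, "advanced": 20, "standard": 2}

def run_cost(category, runs, own_api_key=False):
    """With your own API key configured, every model costs 1 credit per run."""
    per_run = 1 if own_api_key else CREDITS[category]
    return per_run * runs

# 100 runs on an Advanced model
cost_default = run_cost("advanced", 100)                      # 2000 credits
cost_with_key = run_cost("advanced", 100, own_api_key=True)   # 100 credits
```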

Important Considerations

The ‘Use Function’ option enables structured output formatting and is only available for OpenAI models.
Consider task complexity when selecting models. For reasoning-heavy tasks, consider thinking-enabled or specialized reasoning models. For straightforward content generation, standard models are often sufficient and more cost-effective.
Tips for working with connected data:
  • Drag output badges from the side menu directly into your prompt
  • Format text around badges for better prompting
  • All outputs from connected nodes appear in the side menu
  • No need for separate Combine Text nodes
The Ask AI node is text-based only.

The Ask AI node is your interface to leading AI models, helping you automate text processing and generation tasks with customizable control over output style and format. With Gumloop’s improved UI, you can easily incorporate data from connected nodes directly into your prompts, creating powerful automated workflows without complex configuration.