Quick Start
1. Add the Ask AI node to your workflow: drag the Ask AI node from the node library into your canvas.
2. Write your prompt: enter clear, detailed instructions in the prompt field to guide the AI.
3. Choose your AI model: select the model that best fits your task complexity and budget.
4. Connect and run: connect inputs from other nodes by dragging output badges into your prompt.
Node Configuration
Required Fields
Prompt
The main instruction or question for the AI. Your prompt should be clear and detailed to get the best possible response.
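A well-structured prompt typically states the task, the relevant context, and the desired output format. An illustrative example (invented here, not from the original docs):

```
You are a support triage assistant.
Task: Classify the customer message below as "billing", "technical", or "other".
Message: (drag the message output badge here)
Respond with the single category name only.
```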
More Options
Choose AI Model
Select from over 20 AI models including Claude, GPT, Gemini, and specialized reasoning models. See AI Model Selection Guide below for detailed recommendations.
Temperature (0-1)
Controls response creativity and consistency.
- 0: More focused and consistent responses
- 1 (default): More creative and varied outputs
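Under the hood, temperature works by rescaling the model's token probabilities before sampling. A minimal sketch of that mechanism (illustrative only, not Gumloop's implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.
    Lower temperature sharpens the distribution (more focused output);
    higher temperature flattens it (more varied output)."""
    t = max(temperature, 1e-6)  # guard: temperature ~0 collapses to argmax
    scaled = [l / t for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                          # hypothetical token scores
focused = softmax_with_temperature(logits, 0.1)   # top token dominates
varied = softmax_with_temperature(logits, 1.0)    # probability spread out
```

At temperature 0.1 nearly all probability mass lands on the highest-scoring token, which is why low-temperature responses are more consistent run to run.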
Maximum Tokens
Limits the total response length by setting the upper bound on how long the AI’s response can be.
For Claude 3.7 Sonnet Thinking, this must be greater than your Thinking Tokens setting.
Cache Response
Saves responses for reuse when inputs remain constant. Caching works when ALL of these are identical:
- Prompt text (including any inserted input badges)
- Model selection
- Temperature setting
- Maximum tokens
- Thinking tokens (if applicable)
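Conceptually, this behaves like keying a cache on a digest of all five fields: if any one of them changes, the key changes and the cached response is not reused. A hypothetical sketch (the function and hashing scheme are illustrative, not Gumloop's internals):

```python
import hashlib
import json

def cache_key(prompt, model, temperature, max_tokens, thinking_tokens=None):
    """Build a deterministic key from every field that must match
    for a cached response to be reused."""
    payload = json.dumps({
        "prompt": prompt,
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "thinking_tokens": thinking_tokens,
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = cache_key("Summarize this", "Claude 3.7 Sonnet", 0, 2000)
k2 = cache_key("Summarize this", "Claude 3.7 Sonnet", 0.5, 2000)
# Different temperature -> different key -> cache miss
```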
Thinking Tokens (Claude 3.7 Sonnet Thinking only)
Sets a budget for the model’s internal reasoning process before generating the final response. Requirements:
- Minimum: 1,024 tokens
- Must be less than Maximum Tokens
- Recommended: 4,000-16,000 for complex tasks
MCP Server Connection
Connect to a remote Model Context Protocol (MCP) server to extend the AI’s capabilities with custom tools and data sources.
Learn how to set up and use MCP servers with the Ask AI node in the Ask AI MCP Support documentation.
Dynamic Inputs (Show As Input)
You can configure certain parameters as dynamic inputs that can be set by previous nodes in your workflow:

| Parameter | Type | Example Values |
|---|---|---|
| Prompt | String | "Summarize this article" |
| Model Preference | String | "Claude 3.7 Sonnet", "GPT-4.1", "Gemini 2.5 Pro" |
| Temperature | Number | 0 to 1 |
| Maximum Tokens | Number | Any positive integer (e.g., 2000) |
| Thinking Tokens | Number | Minimum 1,024 (Claude 3.7 Sonnet Thinking only) |
When enabled as inputs, these parameters can be dynamically set by previous nodes. If not enabled, the values set in the node configuration will be used.
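The override behavior can be sketched as follows (illustrative Python; `resolve_params` and the field names are hypothetical, not part of Gumloop's API):

```python
def resolve_params(node_config, dynamic_inputs):
    """When a parameter is exposed as an input and a previous node
    supplies a value, that value wins; otherwise the value configured
    on the node itself is used."""
    resolved = dict(node_config)
    for key, value in dynamic_inputs.items():
        if value is not None:  # only override when a value was actually sent
            resolved[key] = value
    return resolved

node_config = {"prompt": "Summarize this article", "temperature": 0.5}
dynamic_inputs = {"temperature": 0}  # set by a previous node
params = resolve_params(node_config, dynamic_inputs)
# temperature comes from the upstream node; prompt falls back to the config
```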
Using Connected Node Data

1. Connect your nodes: drag a connection line between the source node and your Ask AI node.
2. Access outputs in the side menu: available outputs from connected nodes appear automatically in the side menu.
3. Drag outputs into your prompt: drag the output badge from the side menu and drop it into your prompt field.
4. Format around dynamic values: add text before and after the output badges to create well-structured prompts.
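As a rough analogy, an output badge behaves like a placeholder in a string template, with your surrounding text giving the model instructions about the inserted value. The badge name below is invented for illustration:

```python
# "{article_text}" stands in for an input badge dragged from a
# connected node's output in the side menu.
template = (
    "Summarize the article below in three bullet points "
    "for a non-technical audience.\n\n"
    "Article:\n{article_text}"
)
prompt = template.format(article_text="(output of the connected node)")
```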
Claude 3.7 Sonnet: Standard vs. Thinking Mode
Claude 3.7 Sonnet is available in two modes optimized for different use cases:
Standard Mode
Best for: Most everyday tasks
- Direct responses without extended reasoning
- Quick response time
- Excellent for creative content and analysis
- More efficient for straightforward tasks
- Lower cost per request
Thinking Mode
Best for: Complex problem-solving
- Additional internal reasoning before responding
- Higher quality answers for complex problems
- Ideal for multi-step logic and calculations
- Perfect for detailed code and debugging
- Only final response shown (thinking process is internal)
Learn more about Claude 3.7 Sonnet in Anthropic’s announcement.
Understanding Token Budgets
- Thinking Tokens
- Maximum Tokens
For Claude 3.7 Sonnet Thinking only. Budget allocated for the model’s internal reasoning process:
- Must be less than Maximum Tokens
- Minimum: 1,024 tokens
- Recommended: 4,000-16,000 for complex tasks
- Larger budgets improve reasoning but increase cost and response time
- The model decides how much to use based on task complexity
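The constraints above can be expressed as a small validation sketch (illustrative only; the node performs its own validation):

```python
def validate_thinking_budget(thinking_tokens, max_tokens):
    """Check the documented constraints for Claude 3.7 Sonnet Thinking."""
    if thinking_tokens < 1024:
        return False, "Thinking Tokens must be at least 1,024"
    if thinking_tokens >= max_tokens:
        return False, "Thinking Tokens must be less than Maximum Tokens"
    return True, "ok"

ok, _ = validate_thinking_budget(8000, 16000)    # within the recommended range
bad, msg = validate_thinking_budget(500, 16000)  # below the 1,024 minimum
```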
Available AI Models
Gumloop supports 20+ leading AI models:
- Claude 3.7 Sonnet
- Claude 3.7 Sonnet Thinking
- GPT 5
- GPT 5-mini
- GPT 5 nano
- OpenAI o3
- OpenAI o3 Deep Research
- OpenAI o4-mini
- OpenAI o4-mini Deep Research
- GPT-4.1
- GPT-4.1 Mini
- Grok 4
- DeepSeek V3
- DeepSeek R1
- Perplexity Sonar Reasoning
- Perplexity Sonar Reasoning Pro
- Gemini 2.5 Pro
- Gemini 2.5 Flash
- Grok 4 Mini
- Azure OpenAI
- And more…
Deep Research Models
OpenAI o3 Deep Research
Premium deep research capabilities for the most demanding analytical tasks.
OpenAI o4-mini Deep Research
Cost-effective deep research for thorough analysis on a budget.
Deep Research models perform comprehensive, multi-step reasoning and investigation. They’re specifically designed for queries that require thorough research, fact-checking, and synthesizing information from multiple angles.
AI Model Selection Guide
Choose the right model based on your task requirements:

| Model Type | Ideal Use Cases | Key Considerations |
|---|---|---|
| Standard Models | General content creation, basic Q&A, simple analysis | ✓ Lower cost ✓ Faster response time ✓ Good for everyday tasks |
| Advanced Models | Complex analysis, nuanced content, specialized knowledge | ✓ Better quality ✓ Higher cost ✓ Good balance of performance and efficiency |
| Expert & Thinking Models | Complex reasoning, coding, detailed analysis, math problems | ✓ Highest quality ✓ Most expensive ✓ Best for complex tasks ✓ Longer response time |
| Deep Research Models | Comprehensive investigation, fact-checking, multi-source analysis | ✓ Thorough research capability ✓ Multi-step investigation ✓ Best for research-intensive tasks ✓ Longest response time |
About Auto-Select: Uses a third-party model routing service that automatically chooses models based on cost, performance, and availability. Not ideal when consistent model behavior is required.
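If a previous node sets Model Preference dynamically, the routing logic in your workflow might resemble this sketch (the categories follow the table above; the specific model choices are examples, not recommendations from Gumloop):

```python
def pick_model(task):
    """Map a task category to a model, following the selection guide:
    deep research -> Deep Research models, heavy reasoning -> thinking
    models, nuanced work -> advanced models, everything else -> standard."""
    if task == "deep_research":
        return "OpenAI o3 Deep Research"
    if task in ("reasoning", "coding", "math"):
        return "Claude 3.7 Sonnet Thinking"
    if task in ("complex_analysis", "nuanced_content"):
        return "GPT-4.1"
    return "GPT-4.1 Mini"  # standard everyday tasks

pick_model("math")  # routes to a thinking model per the guide
```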
When to Use Deep Research Models
Deep Research models (OpenAI o3 Deep Research and o4-mini Deep Research) are designed for tasks that require comprehensive investigation and analysis.
Perfect use cases for Deep Research:
- Market Research: Analyzing industry trends, competitor landscapes, and market opportunities
- Due Diligence: Investigating companies, technologies, or business proposals
- Fact-Checking: Verifying claims across multiple sources and perspectives
- Literature Review: Synthesizing information from multiple documents or sources
- Competitive Analysis: Deep comparison of products, services, or strategies
- Complex Report Generation: Creating comprehensive reports that require thorough investigation
- Multi-Perspective Analysis: Examining topics from different angles and viewpoints
Deep Research Model Comparison
OpenAI o3 Deep Research
Best for: Most demanding research tasks
- Highest quality deep research
- Most comprehensive analysis
- Best for critical business decisions
- Longer processing time
OpenAI o4-mini Deep Research
Best for: Faster research
- Good quality deep research
- Faster than o3 Deep Research
- Suitable for routine research tasks
Additional Selection Factors
Consider these factors when choosing a model:
- Task complexity and required accuracy
- Response time requirements
- Cost considerations
- Consistency needs across runs
- Specialized knowledge requirements
- Need for comprehensive investigation vs. quick answers
Node Output
Response: The AI’s generated answer or output based on your prompt and configured parameters.
Common Use Cases
Content Creation
Data Analysis
Customer Support
Research & Investigation (with Deep Research)
Loop Mode Pattern
When processing multiple items in Loop Mode, the Ask AI node analyzes each item individually: the workflow runs once for each item in the input list, allowing batch processing of multiple documents, queries, or data points.
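Loop Mode is conceptually equivalent to mapping one node run over each item of a list; a sketch (`run_ask_ai` is a stand-in for a single node execution, not a real API):

```python
def run_ask_ai(item):
    """Stand-in for one Ask AI node run with the item inserted
    into the prompt (hypothetical)."""
    return f"Summary of: {item}"

documents = ["doc A", "doc B", "doc C"]
# In Loop Mode the workflow runs once per list item:
summaries = [run_ask_ai(doc) for doc in documents]
```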
Credit Costs
Understanding credit usage helps you optimize workflow costs:

| Model Category | Credits per Run | With API Key |
|---|---|---|
| Expert Models (OpenAI o3, Claude 3.7 Thinking) | 30 credits | 1 credit |
| Advanced Models (GPT-4.1, Claude 3.7) | 20 credits | 1 credit |
| Standard Models | 2 credits | 1 credit |
Configure your own API keys in the credentials page to reduce credit costs to 1 per run for that specific provider’s models.
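A quick way to estimate workflow credit usage from the table above (an illustrative sketch; actual billing is determined by Gumloop):

```python
# Credits per run by model category, taken from the table above.
# With your own API key configured, every run costs 1 credit instead.
CREDIT_COSTS = {"expert": 30, "advanced": 20, "standard": 2}

def run_cost(category, runs, own_api_key=False):
    """Estimate total credits for a number of runs in one category."""
    per_run = 1 if own_api_key else CREDIT_COSTS[category]
    return per_run * runs

run_cost("advanced", 100)                    # 100 runs on an advanced model
run_cost("advanced", 100, own_api_key=True)  # same runs with your own key
```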
Important Considerations
Function Calling
The ‘Use Function’ option enables structured output formatting and is only available for OpenAI models.
Learn more in the OpenAI Function Calling Documentation.
Model Selection Strategy
Consider task complexity when selecting models. For reasoning-heavy tasks, consider thinking-enabled or specialized reasoning models. For straightforward content generation, standard models are often sufficient and more cost-effective.
Working with Connected Nodes
- Drag output badges from the side menu directly into your prompt
- Format text around badges for better prompting
- All outputs from connected nodes appear in the side menu
- No need for separate Combine Text nodes
Multimodal Content
The Ask AI node is text-based only:
- To analyze images, use the Analyze Image node
- To create images, use the Generate Image node
The Ask AI node is your interface to leading AI models, helping you automate text processing and generation tasks with customizable control over output style and format. With Gumloop’s improved UI, you can easily incorporate data from connected nodes directly into your prompts, creating powerful automated workflows without complex configuration.