Overview
This suite of enterprise features includes three key capabilities:
- AI Model Access Control: Restrict which AI models organization members can use
- Organization Credentials: Centrally manage API keys and authentication credentials
- AI Proxy Routing: Route AI provider requests through custom proxy URLs
AI Model Access Control
Overview
AI Model Access Control allows organization administrators to restrict which AI models are available to organization members. This feature provides granular control over AI usage, cost management, and compliance requirements.
Accessing Model Access Control
Navigate to: gumloop.com/settings/organization/ai-model-access-control
Key Features
- Allow/Deny Lists: Configure either an allow list (only selected models available) or deny list (selected models blocked)
- Provider-Level Control: Select/deselect all models from specific AI providers at once
- Fallback Model: Designate a fallback model that is used automatically when a restricted model is requested
Supported AI Providers
The system supports model restrictions across all integrated AI providers:
- OpenAI (GPT models)
- Anthropic (Claude models)
- Google (Gemini models)
- DeepSeek models
- Azure OpenAI
- Grok
- And other integrated providers
Configuration Process
1. Enable Model Restrictions

- Toggle “restrict models?” switch to enable access control
- When first enabled, all models are selected by default (allow list mode)
2. Choose Control Mode
Allow List Mode: Only selected models are available to members
- Best for strict control environments
- Users can only access explicitly permitted models
- Recommended for compliance-heavy organizations
Deny List Mode: Selected models are blocked; all other models remain available to members
- Best for general restrictions with specific exclusions
- Users can access all models except those explicitly blocked
- Recommended for cost control scenarios
3. Select Models by Provider

- Browse models organized by AI provider
- Use provider-level checkboxes to select/deselect all models from a provider
- Individual model selection available within each provider
4. Configure Fallback Model
- System automatically selects first available model as fallback
- Used when a restricted model is requested in workflows
- Ensures workflows continue to function even when preferred models are restricted
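As an illustration, the sketch below shows how allow/deny list resolution and the fallback model could fit together. The config shape, function name, and model names are hypothetical examples, not Gumloop’s internal API.

```python
# Hypothetical sketch of allow/deny list resolution with a fallback model.
# The config shape and model names are illustrative, not Gumloop's internal API.

RESTRICTION_CONFIG = {
    "mode": "allow",                                  # "allow" or "deny"
    "selected_models": {"gpt-4o", "claude-sonnet-4"},
    "fallback_model": "gpt-4o",                       # first available model by default
}

def resolve_model(requested: str, config: dict = RESTRICTION_CONFIG) -> str:
    """Return the model a workflow run is actually allowed to use."""
    selected = config["selected_models"]
    if config["mode"] == "allow":
        permitted = requested in selected             # allow list: only selected models
    else:
        permitted = requested not in selected         # deny list: everything except selected
    # Restricted models are swapped for the fallback so the workflow keeps running.
    return requested if permitted else config["fallback_model"]

print(resolve_model("claude-sonnet-4"))   # permitted -> claude-sonnet-4
print(resolve_model("o3-mini"))           # restricted -> falls back to gpt-4o
```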

Common Use Cases
Compliance Requirements:
- Enable allow list mode
- Only permit models that meet data residency requirements
- Block models that don’t comply with industry regulations
Performance Optimization:
- Allow only fast-response models for real-time workflows
- Block slower but more capable models for batch processing
Organization Credentials
Overview
Organization Credentials enables centralized management of AI model API keys at the organization level. When configured, these credentials automatically override both personal and workspace credentials, ensuring consistent access and simplified management across your entire organization.
Accessing Organization Credentials
Navigate to: gumloop.com/settings/organization/credentials
Key Features
- Centralized Management: Single point of control for all organization AI API keys
- Automatic Override: Organization credentials take precedence over both personal and workspace credentials
Supported AI Providers
Organization credentials are currently available for the following AI providers:

| Provider | API Key Type | Description |
|---|---|---|
| OpenAI | API Key | Access to GPT models and other OpenAI services |
| Anthropic | API Key | Access to Claude models |
| Perplexity | API Key | Access to Perplexity AI models |
| XAI | API Key | Access to Grok models |
Credential Priority
- When an organization credential exists for an AI provider, it will always be used
- Workspace credentials are ignored when organization credentials are present
- Personal credentials are only used when neither organization nor workspace credentials exist
- This ensures consistent billing and access across all organization workflows
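A minimal sketch of this precedence order is shown below; the function and argument names are hypothetical, and the keys are placeholders.

```python
# Hypothetical sketch of credential precedence: organization > workspace > personal.
# The lookup structure and key values are illustrative only.

def resolve_credential(provider: str,
                       org_creds: dict,
                       workspace_creds: dict,
                       personal_creds: dict) -> str | None:
    """Return the API key a workflow run would use for a provider."""
    # An organization credential, when present, always wins.
    if provider in org_creds:
        return org_creds[provider]
    # Workspace credentials apply only when the organization has none.
    if provider in workspace_creds:
        return workspace_creds[provider]
    # Personal credentials are the last resort.
    return personal_creds.get(provider)

key = resolve_credential(
    "openai",
    org_creds={"openai": "sk-org-..."},
    workspace_creds={"openai": "sk-workspace-..."},
    personal_creds={"openai": "sk-personal-..."},
)
print(key)  # -> "sk-org-..." because the organization credential overrides the rest
```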
Configuration Process
1. Add New AI Credential

- Click “Add Credential” button
- Select from available AI providers: OpenAI, Anthropic, Perplexity, or XAI
- Choose “API Key” as the credential type
2. Configure API Key
- Enter your organization’s API key for the selected provider
- Set any provider-specific configuration options
Benefits
Unified Access Management:
- All organization members automatically use organization API keys
- Eliminates need for individual API key management across teams
- Ensures consistent access across all workflows regardless of workspace
Cost Management:
- Centralized billing for all AI API usage
- All AI requests use organization’s negotiated rates and limits
- Easier tracking of consumption across teams and projects
- Simplified budget management and forecasting
Security and Compliance:
- Ensure all AI calls use audited and compliant credentials
- Maintain consistent security policies across workflows
- Prevent unauthorized AI usage through personal credentials
AI Proxy Routing
Overview
AI Proxy Routing allows enterprise organizations to route AI provider requests through custom proxy URLs. This enables organizations to use their own AI infrastructure, implement custom security policies, or integrate with specialized AI gateways.
Key Features
- Custom Proxy URLs: Route requests through organization-controlled proxy servers
- Model Name Mapping: Map Gumloop model names to custom proxy model identifiers
- Provider-Specific Configuration: Configure different proxies for different AI providers
Configuration Process
1. Access Proxy Configuration

- Navigate to organization credentials page
- Find AI provider credential (OpenAI, Anthropic, etc.)
- Click “Configure Proxy” button (only visible for AI providers)
2. Set Proxy URL
- Enter custom proxy base URL (e.g., https://your-proxy.company.com)
- URL will replace the default AI provider endpoint
- Must be accessible from Gumloop infrastructure
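Before saving a proxy URL, it can help to confirm the endpoint responds. The check below assumes the proxy exposes an OpenAI-compatible /v1/models route; that path and the key are assumptions about your proxy, not a Gumloop requirement.

```python
# Rough connectivity check for a custom proxy base URL.
# Assumes an OpenAI-compatible /v1/models endpoint (placeholder URL and key).
import requests

def check_proxy(base_url: str, api_key: str) -> bool:
    try:
        resp = requests.get(
            f"{base_url.rstrip('/')}/v1/models",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        return resp.ok
    except requests.RequestException:
        return False

print(check_proxy("https://your-proxy.company.com", "sk-org-..."))
```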
3. Configure Model Mappings (Optional)

- Map Gumloop model names to proxy-specific model identifiers
- Example: Map gpt-4 to custom-gpt-4-enterprise
- Useful when proxy uses non-standard model names
Supported Scenarios
Enterprise AI Gateways:
- Route through corporate AI management platforms
- Implement centralized logging, monitoring, and cost tracking
- Apply consistent security policies across all AI usage
Custom Model Deployments:
- Access your organization’s fine-tuned models through custom endpoints
- Map standard model names to specialized variants
- Maintain compatibility with existing workflows
Regional Compliance:
- Ensure all AI requests stay within specific geographic regions
- Route through compliant proxy infrastructure
- Maintain audit trails for regulatory requirements
Configuration Examples
Basic Proxy Setup
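At the request level, a basic proxy setup simply replaces the provider’s default endpoint with your proxy’s base URL. The sketch below assumes an OpenAI-compatible proxy; the URL and key are placeholders.

```python
# Sketch of a basic proxy setup: the proxy base URL replaces the provider's
# default endpoint. Assumes an OpenAI-compatible proxy; values are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-proxy.company.com/v1",  # custom proxy instead of the default endpoint
    api_key="sk-org-...",                          # organization credential
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```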
Custom Model Names
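Model name mapping can be pictured as a simple lookup applied before each request; the mapped identifiers below are placeholders.

```python
# Sketch of model name mapping: Gumloop-facing names on the left, proxy-specific
# identifiers on the right. The mapped values are placeholders.
MODEL_MAPPING = {
    "gpt-4": "custom-gpt-4-enterprise",
    "gpt-4o": "custom-gpt-4o-enterprise",
}

def to_proxy_model(gumloop_model: str) -> str:
    # Unmapped names pass through unchanged.
    return MODEL_MAPPING.get(gumloop_model, gumloop_model)

print(to_proxy_model("gpt-4"))          # -> custom-gpt-4-enterprise
print(to_proxy_model("gpt-3.5-turbo"))  # -> gpt-3.5-turbo (no mapping, unchanged)
```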
Regional Compliance
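For region-pinned routing, each provider’s traffic can be sent through a proxy hosted in the required region. The hostnames below are placeholders and assume one proxy deployment per region.

```python
# Sketch of a region-pinned proxy configuration: every provider routes through a
# proxy hosted in the required region. Hostnames and paths are placeholders.
REGIONAL_PROXIES = {
    "openai":    "https://ai-proxy.eu-central.company.com/openai/v1",
    "anthropic": "https://ai-proxy.eu-central.company.com/anthropic",
}

def proxy_for(provider: str) -> str:
    # Keeping all providers behind proxies in one region helps ensure requests
    # stay within that geography and gives a single place to log for audit trails.
    return REGIONAL_PROXIES[provider]

print(proxy_for("openai"))
```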
How These Features Work Together
When a workflow that requires AI executes:
- Model Selection: AI Model Access Control determines which models are available
- Credential Resolution: Organization Credentials provide the authentication
- Request Routing: AI Proxy Routing determines the endpoint and model mapping
- Execution: Request is sent through the configured proxy with organization credentials
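Put together, a single AI step resolves roughly as in the sketch below. All values are hard-coded placeholders, and the real resolution happens inside Gumloop, not in user code; it assumes an OpenAI-compatible proxy.

```python
# End-to-end sketch of the four steps above, with all inputs hard-coded for
# illustration. Names, URLs, and keys are placeholders.
from openai import OpenAI

ALLOWED_MODELS = {"gpt-4", "gpt-4o"}                      # allow list from Model Access Control
FALLBACK_MODEL = "gpt-4"
ORG_CREDENTIALS = {"openai": "sk-org-..."}                # organization credential
PROXY_BASE_URL = "https://your-proxy.company.com/v1"      # custom proxy endpoint
MODEL_MAPPING = {"gpt-4": "custom-gpt-4-enterprise"}      # proxy model name mapping

def run_ai_step(requested_model: str, prompt: str) -> str:
    # 1. Model Selection: access control picks the model (or the fallback).
    model = requested_model if requested_model in ALLOWED_MODELS else FALLBACK_MODEL
    # 2. Credential Resolution: the organization credential overrides workspace/personal keys.
    api_key = ORG_CREDENTIALS["openai"]
    # 3. Request Routing: custom proxy endpoint plus model name mapping.
    client = OpenAI(base_url=PROXY_BASE_URL, api_key=api_key)
    proxy_model = MODEL_MAPPING.get(model, model)
    # 4. Execution: the request is sent through the proxy with organization credentials.
    response = client.chat.completions.create(
        model=proxy_model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(run_ai_step("o3-mini", "Summarize this week's support tickets."))
```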
Security and Compliance
Access Control
- All features require organization admin privileges
Data Security
- Encrypted storage for all credentials and configuration
- Secure transmission of proxy configurations
- Audit logging for all administrative actions