Databricks is a unified analytics platform for data engineering, data science, and machine learning. The Databricks MCP server lets you manage clusters, run jobs, execute SQL, and query ML endpoints using natural language.

What Can It Do?

  • Manage clusters by listing, starting, and terminating them on demand
  • Orchestrate jobs by triggering runs and fetching outputs
  • Run SQL on warehouses and return structured data
  • Query ML endpoints and vector indexes for AI workflows
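
Under the hood, these capabilities are exposed as MCP tools. As a minimal sketch, here is how you could connect and list them with the official MCP Python SDK; the endpoint URL below is a placeholder, not the real server address.

```python
# Minimal sketch: connect to a remote MCP server and list its tools using
# the official MCP Python SDK (pip install mcp). The URL is a placeholder;
# use the endpoint and credentials Gumloop provides.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    async with streamablehttp_client("https://example.com/databricks/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```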

Where to Use It

Add Databricks as a tool to any agent. The agent can then interact with your workspace conversationally, choosing the right actions based on context. To add an MCP tool to your agent:
  1. Open your agent’s configuration
  2. Click Add tools, then Connect an app with MCP
  3. Search for the integration and select it
  4. Authenticate with your account
You can control which tools your agent has access to. After adding an integration, click on it to enable or disable specific tools based on what your agent needs.

In Workflows (Via Agent Node)

For automated pipelines, use an Agent Node with Databricks tools. This gives you the flexibility of an agent within a deterministic workflow.

As a Custom MCP Node

You can also create a standalone MCP node for a specific action. This generates a reusable node that performs one task, useful when you need the same operation repeatedly in workflows.
To create a custom MCP node:
  1. Go to your node library and search for the integration
  2. Click Create a node with AI
  3. Describe the specific action you want (e.g., “List all active clusters”)
  4. Test the node and save it for reuse
Custom MCP nodes are single-purpose by design. For tasks that require multiple steps or dynamic decision-making, use an agent instead.
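
For instance, a node described as “List all active clusters” boils down to a single, fixed operation. A rough equivalent in the Databricks Python SDK is sketched below; the filter logic is illustrative, not what the node actually runs.

```python
# Illustrative only: the single operation behind a "List all active
# clusters" node, expressed with the Databricks Python SDK
# (pip install databricks-sdk).
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import State

w = WorkspaceClient()  # reads DATABRICKS_HOST / DATABRICKS_TOKEN from the environment
active = [c for c in w.clusters.list() if c.state == State.RUNNING]
for cluster in active:
    print(cluster.cluster_name, cluster.cluster_id)
```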

Available Tools

  • Get Me: Get authenticated user information
  • List Clusters: List all pinned and active clusters
  • Start Cluster: Start a terminated cluster
  • Terminate Cluster: Terminate a running cluster
  • List Jobs: List jobs with pagination
  • Run Job: Trigger a new job run
  • Manage Job Run: Cancel or delete a job run
  • Get Job Run Output: Get output from a job run
  • Execute SQL: Run SQL on a warehouse
  • List Warehouses: List all SQL warehouses
  • Query Serving Endpoint: Query a model serving endpoint
  • List Serving Endpoints: List all serving endpoints
  • Query Vector Index: Query a vector index
  • List Vector Search Endpoints: List vector search endpoints
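
These tools map onto standard Databricks REST APIs. As a hedged sketch, here is what a few of them correspond to in the Databricks Python SDK; the SDK calls are real, but the tool-to-call mapping is an assumption, not the server’s actual implementation.

```python
# Assumed mapping from MCP tools to Databricks Python SDK calls; how the
# server implements each tool is a guess, but the SDK calls exist as shown.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Get Me
me = w.current_user.me()
print(me.user_name)

# List Clusters
for cluster in w.clusters.list():
    print(cluster.cluster_name, cluster.state)

# Run Job (job_id is a placeholder)
waiter = w.jobs.run_now(job_id=123)  # call waiter.result() to block until done

# Execute SQL (warehouse_id is a placeholder)
result = w.statement_execution.execute_statement(
    statement="SELECT 1",
    warehouse_id="abc123",
)
print(result.status.state)
```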

Example Prompts

Use these with your agent or in the Agent Node:

Manage clusters:
List all my clusters and their current status
Start compute:
Start the cluster named "analytics-cluster"
Run a job:
Trigger the daily ETL job and return the run ID
Execute SQL:
Run "SELECT * FROM sales WHERE region = 'West'" on the main warehouse
Query ML endpoint:
Query the fraud-detection endpoint with this transaction data
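
Behind a prompt like the SQL example, the agent issues a direct tool call. A sketch of what that might look like follows; the tool name "execute_sql" and its argument keys are assumptions, so check list_tools() for the schema your server actually exposes.

```python
# Hypothetical direct tool call behind the SQL prompt above. The tool
# name and argument keys are assumptions; inspect list_tools() for the
# names your server actually uses.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Placeholder URL, as in the earlier connection sketch.
    async with streamablehttp_client("https://example.com/databricks/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "execute_sql",
                {
                    "statement": "SELECT * FROM sales WHERE region = 'West'",
                    "warehouse_id": "abc123",  # placeholder warehouse ID
                },
            )
            for block in result.content:
                print(block)

asyncio.run(main())
```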

Troubleshooting

  • Agent not finding the right data: Use specific cluster or job names
  • Action not completing: Check that you’ve authenticated and have the necessary workspace permissions
  • Unexpected results: The agent may chain multiple tools (e.g., listing jobs first, then running one). Review the agent’s reasoning to understand its approach.
  • Tool not available: Verify the tool is enabled in your agent’s MCP configuration
Agents are smart enough to chain multiple API calls together. For example, asking “Run the ETL job” will find the job first, then trigger it. If results seem off, check the agent’s step-by-step reasoning.
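
That “find it, then run it” chain looks roughly like the sketch below when expressed directly against the Databricks Python SDK; the job name is a placeholder and the lookup logic is an illustration, not the agent’s actual code.

```python
# Illustration of the agent's two-step chain: find the job by name, then
# trigger it. The job name is a placeholder; this is not the agent's code.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Step 1: List Jobs, filtered by name.
etl_job = next(w.jobs.list(name="daily-etl"))

# Step 2: Run Job with the discovered job_id, then wait for the result.
run = w.jobs.run_now(job_id=etl_job.job_id).result()
print(f"Run {run.run_id} finished in state {run.state.result_state}")
```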

You can also use this integration directly in Claude or Cursor: connect remotely to the Databricks MCP server using your Gumloop credentials.