Severity: High · Category: configuration

Power BI Refresh Error:
JOB_CLUSTER_POLICY_VIOLATION

What does this error mean?

A job attempted to start a cluster with a configuration that violates one or more rules in the assigned cluster policy, causing the cluster creation to be rejected.
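To make the rejection concrete, the sketch below mimics how the three documented policy rule types ("fixed", "range", "allowlist") constrain a cluster spec. The rule types and the minValue/maxValue keys follow Databricks cluster-policy definition syntax, but the validator function and the example policy values are hypothetical, for illustration only:

```python
# Illustrative sketch of cluster-policy evaluation. The rule types
# ("fixed", "range", "allowlist") mirror Databricks policy definition
# syntax; the validator itself is hypothetical, not the real enforcement code.

def violations(policy: dict, cluster: dict) -> list[str]:
    """Return the fields in `cluster` that break a rule in `policy`."""
    errors = []
    for field, rule in policy.items():
        value = cluster.get(field)
        kind = rule.get("type")
        if kind == "fixed" and value not in (None, rule["value"]):
            errors.append(f"{field}: must be {rule['value']!r}, got {value!r}")
        elif kind == "range" and value is not None:
            lo = rule.get("minValue", float("-inf"))
            hi = rule.get("maxValue", float("inf"))
            if not lo <= value <= hi:
                errors.append(f"{field}: {value} outside [{lo}, {hi}]")
        elif kind == "allowlist" and value is not None and value not in rule["values"]:
            errors.append(f"{field}: {value!r} not in allowed values")
    return errors

# Example policy and a job cluster spec that violates it on both fields.
policy = {
    "node_type_id": {"type": "allowlist", "values": ["m5.large", "m5.xlarge"]},
    "autoscale.max_workers": {"type": "range", "maxValue": 10},
}
cluster = {"node_type_id": "i3.4xlarge", "autoscale.max_workers": 50}
print(violations(policy, cluster))  # both fields violate the policy
```

A cluster that omits a "fixed" field passes, because the policy supplies the value; a cluster that hardcodes a different value is rejected, which is exactly the JOB_CLUSTER_POLICY_VIOLATION case.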

Common causes

  • The job cluster specifies a node type, Spark version, or autoscale range that is outside the policy's allowed values
  • The policy was updated by an admin and the job configuration no longer complies with the new rules
  • A developer hardcoded cluster settings in the job JSON that override the policy defaults
  • The policy requires specific Spark or init script configurations that were omitted
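As an illustration of the hardcoding cause, a job's cluster spec might pin fields that the policy fixes or bounds. The field names below follow the Databricks Jobs API new_cluster schema; the IDs and values are made up:

```json
{
  "new_cluster": {
    "policy_id": "ABC123",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "i3.4xlarge",
    "autoscale": { "min_workers": 2, "max_workers": 50 }
  }
}
```

If policy ABC123 fixes spark_version to a different runtime, restricts node_type_id to another family, or caps max_workers below 50, cluster creation is rejected with this error.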

How to fix it

  1. Open the job definition and review the cluster configuration under the failing task.
  2. In the Databricks workspace, navigate to Compute > Policies and view the rules of the policy assigned to the job.
  3. Align the cluster configuration with the policy: remove or adjust any field that is fixed or bounded by the policy.
  4. If the policy is too restrictive for legitimate workload needs, request a policy update from a workspace admin.
  5. Save and re-run the job to confirm the cluster starts successfully.
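The alignment step can be sketched as a small helper that strips any field the policy fixes, so the policy-supplied default applies instead of the hardcoded value. This is a hypothetical illustration, not a Databricks API; in practice you would edit the job JSON by hand or through your deployment tooling:

```python
# Hypothetical helper for step 3: drop any cluster field the policy
# declares as "fixed", so the policy default takes effect on the next run.

def align_with_policy(policy: dict, cluster: dict) -> dict:
    """Return a copy of `cluster` without policy-fixed fields."""
    fixed = {f for f, rule in policy.items() if rule.get("type") == "fixed"}
    return {f: v for f, v in cluster.items() if f not in fixed}

policy = {"spark_version": {"type": "fixed", "value": "13.3.x-scala2.12"}}
cluster = {"spark_version": "12.2.x-scala2.12", "num_workers": 4}
print(align_with_policy(policy, cluster))  # {'num_workers': 4}
```

Fields bounded by "range" or "allowlist" rules should instead be adjusted to a value inside the allowed set rather than removed.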

Frequently asked questions

Who can modify cluster policies in Databricks?

Only workspace admins can create, modify, or delete cluster policies. Developers can view policies but cannot change them. If a policy needs updating for a legitimate workload, open a request with the workspace admin.

Can I use a different cluster policy for the same job in different environments?

Yes. Using Databricks Asset Bundles, you can define per-target overrides that assign a different policy_id for dev, staging, and production environments.
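A per-target override can be expressed in the bundle's databricks.yml by routing policy_id through a variable. The structure below follows Asset Bundle conventions (top-level variables with per-target overrides); the bundle name, job name, and policy IDs are placeholders:

```yaml
bundle:
  name: my_bundle        # placeholder name

variables:
  policy_id:
    description: Cluster policy to apply in this target

resources:
  jobs:
    nightly_job:         # placeholder job
      name: nightly_job
      job_clusters:
        - job_cluster_key: main
          new_cluster:
            spark_version: 13.3.x-scala2.12
            policy_id: ${var.policy_id}

targets:
  dev:
    variables:
      policy_id: "D1234567890"   # placeholder dev policy ID
  prod:
    variables:
      policy_id: "P0987654321"   # placeholder prod policy ID
```

Deploying with `databricks bundle deploy -t dev` or `-t prod` then resolves the variable to the policy for that environment.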
