Severity: High · Category: cluster

Power BI Refresh Error:
SPARK_STARTUP_FAILURE

What does this error mean?

The Spark driver failed to start within the configured startup timeout (approximately 200 seconds). The cluster could not initialize and any jobs waiting for it were terminated.
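The timeout semantics described above can be sketched as a simple polling loop: keep checking a readiness probe until a deadline (about 200 seconds in Databricks' case) and raise a startup error if the driver never comes up. This mirrors the behaviour only; it is not Databricks internals, and `is_ready` is a stand-in for whatever health check the platform performs.

```python
# Illustrative sketch of the startup-timeout behaviour, not Databricks code.
import time

STARTUP_TIMEOUT_S = 200  # approximate default described above

def wait_for_driver(is_ready, timeout_s=STARTUP_TIMEOUT_S, poll_s=5):
    """Poll `is_ready` until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(poll_s)
    # Jobs waiting on the cluster are terminated when this fires.
    raise TimeoutError("SPARK_STARTUP_FAILURE: driver did not start in time")

# Stub readiness check that succeeds immediately:
print(wait_for_driver(lambda: True, timeout_s=1, poll_s=0.1))  # True
```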

Common causes

  • The driver node ran out of memory during Spark context initialization
  • A corrupt or failing init script prevented Spark from starting
  • A library conflict or incompatible dependency version in the cluster configuration
  • The Databricks Runtime version has a known startup bug
  • The driver instance type is too small for the cluster configuration
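For the library-conflict cause, one quick probe is to run `pip check` in a notebook cell on a cluster that does start (or locally against the same package set): it reports installed packages whose declared dependencies are broken. This is a generic Python check, not a Databricks-specific tool.

```python
# Run `pip check` to surface dependency conflicts among installed packages.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pip", "check"],
    capture_output=True, text=True,
)
# `pip check` exits 0 when all installed packages have compatible dependencies.
if result.returncode != 0:
    print("Dependency conflicts found:")
    print(result.stdout)
else:
    print("No broken requirements found.")
```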

How to fix it

  1. Open the cluster event log and look at the driver logs around the startup failure time
  2. Check for init script errors — disable all init scripts and retry to isolate the cause
  3. Review installed libraries for version conflicts, especially between Python packages and the DBR version
  4. Try a fresh cluster without any custom libraries or init scripts to confirm the base image starts
  5. Increase the driver node size if memory pressure during startup is indicated in the logs
  6. Try a different Databricks Runtime version to rule out a DBR-specific issue
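The steps above center on the cluster event log. As an illustrative sketch (not a Databricks API — the field names here are assumptions), once you have event records exported as dicts, you can narrow them to the window just before the failure, where init-script and driver errors usually appear:

```python
# Hypothetical triage helper: filter event records to the minutes leading up
# to the first startup failure. Record shape is an assumption for this sketch.
from datetime import datetime, timedelta

def events_near_failure(events, window_minutes=5):
    """Return events within `window_minutes` before the first startup failure."""
    failures = [e for e in events if e["type"] == "SPARK_STARTUP_FAILURE"]
    if not failures:
        return []
    t_fail = failures[0]["timestamp"]
    cutoff = t_fail - timedelta(minutes=window_minutes)
    return [e for e in events if cutoff <= e["timestamp"] <= t_fail]

sample = [
    {"type": "CREATING", "timestamp": datetime(2024, 1, 1, 12, 0)},
    {"type": "INIT_SCRIPTS_STARTED", "timestamp": datetime(2024, 1, 1, 12, 2)},
    {"type": "SPARK_STARTUP_FAILURE", "timestamp": datetime(2024, 1, 1, 12, 5)},
]
print([e["type"] for e in events_near_failure(sample)])
```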

Frequently asked questions

Can I see the Spark startup logs?

Yes — open the cluster in the Databricks UI, go to Event Log, then click on the failed startup event to access driver logs.
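Once you have the driver log text, a quick scan for known failure signatures can save reading the whole file. The patterns below are illustrative examples, not a complete or official list of Databricks log formats:

```python
# Hypothetical sketch: flag driver-log lines that commonly indicate why
# Spark failed to start. Extend ERROR_PATTERNS for your own environment.
import re

ERROR_PATTERNS = [
    r"OutOfMemoryError",
    r"init script .* failed",
    r"java\.lang\.NoClassDefFoundError",
]

def suspect_lines(log_text):
    """Return log lines matching any known failure pattern."""
    pattern = re.compile("|".join(ERROR_PATTERNS), re.IGNORECASE)
    return [line for line in log_text.splitlines() if pattern.search(line)]

log = """\
24/01/01 12:04:59 INFO DriverDaemon: starting
24/01/01 12:05:00 ERROR DriverDaemon: java.lang.OutOfMemoryError: Java heap space
"""
print(suspect_lines(log))
```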

How long does Databricks wait before declaring SPARK_STARTUP_FAILURE?

Approximately 200 seconds by default.
