Severity: High | Category: cluster
Power BI Refresh Error: SPARK_STARTUP_FAILURE
What does this error mean?
The Spark driver failed to start within the configured startup timeout (roughly 200 seconds). Because the cluster never initialized, any jobs waiting on it, including the Power BI refresh that triggered this error, were terminated.
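When a cluster dies this way, the termination reason is surfaced through the Databricks Clusters API. A minimal sketch of reading it, assuming the documented response shape of `GET /api/2.0/clusters/get`; the sample payload below is illustrative, not a real workspace response:

```python
# Sketch: detect a Spark startup failure from a Clusters API "get" response.
# Field names follow the Databricks Clusters API; all values here are made up.

def startup_failure_summary(cluster_info):
    """Return a short description if the cluster died at startup, else None."""
    reason = cluster_info.get("termination_reason", {})
    if reason.get("code") != "SPARK_STARTUP_FAILURE":
        return None
    params = reason.get("parameters", {})
    return "Spark failed to start: " + params.get("databricks_error_message", "no detail")

# Illustrative payload; in practice this comes from an authenticated API call.
sample = {
    "cluster_id": "0123-456789-abcde",
    "state": "TERMINATED",
    "termination_reason": {
        "code": "SPARK_STARTUP_FAILURE",
        "type": "SERVICE_FAULT",
        "parameters": {"databricks_error_message": "Driver failed to start in time"},
    },
}

print(startup_failure_summary(sample))
```

The `databricks_error_message` parameter, when present, is usually the fastest pointer to which of the causes below applies.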
Common causes
1. The driver node ran out of memory during Spark context initialization
2. A corrupt or failing init script prevented Spark from starting
3. A library conflict or an incompatible dependency version in the cluster configuration
4. The Databricks Runtime (DBR) version has a known startup bug
5. The driver instance type is too small for the cluster configuration
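Most of these causes map to fields in the cluster specification. A hypothetical minimal spec for a debug cluster, with field names taken from the Clusters API create request and illustrative values, might look like this (an empty `init_scripts` list rules out cause 2, and a larger `driver_node_type_id` addresses causes 1 and 5):

```json
{
  "cluster_name": "debug-startup",
  "spark_version": "14.3.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "driver_node_type_id": "Standard_DS4_v2",
  "num_workers": 2,
  "init_scripts": []
}
```

Libraries are installed separately from this spec, so a cluster created from it also starts without custom dependencies, which helps isolate cause 3.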
How to fix it
1. Open the cluster event log and review the driver logs around the time of the startup failure
2. Check for init script errors: disable all init scripts and retry to isolate the cause
3. Review installed libraries for version conflicts, especially between Python packages and the DBR version
4. Start a fresh cluster with no custom libraries or init scripts to confirm that the base image starts
5. Increase the driver node size if the logs indicate memory pressure during startup
6. Try a different Databricks Runtime version to rule out a DBR-specific issue
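For step 1, the cluster event log can be pulled programmatically and filtered down to startup-relevant entries. A sketch, assuming the event shape returned by the Clusters API events endpoint (`POST /api/2.0/clusters/events`); the event-type names are taken from that API, and the sample list is made up:

```python
# Sketch: filter a cluster event log to the entries that matter for a startup
# failure. Event types follow the Databricks Clusters API; timestamps are
# epoch milliseconds. The sample events below are illustrative.

STARTUP_EVENT_TYPES = {
    "INIT_SCRIPTS_STARTED",
    "INIT_SCRIPTS_FINISHED",
    "DRIVER_NOT_RESPONDING",
    "SPARK_EXCEPTION",
    "TERMINATING",
}

def startup_timeline(events):
    """Return (timestamp, type) pairs for startup-relevant events, in order."""
    relevant = [e for e in events if e.get("type") in STARTUP_EVENT_TYPES]
    return sorted((e["timestamp"], e["type"]) for e in relevant)

sample_events = [
    {"timestamp": 1700000000000, "type": "CREATING"},
    {"timestamp": 1700000050000, "type": "INIT_SCRIPTS_STARTED"},
    {"timestamp": 1700000190000, "type": "DRIVER_NOT_RESPONDING"},
    {"timestamp": 1700000210000, "type": "TERMINATING"},
]

for ts, kind in startup_timeline(sample_events):
    print(ts, kind)
```

In a timeline like the sample, an `INIT_SCRIPTS_STARTED` with no matching `INIT_SCRIPTS_FINISHED` before the driver goes unresponsive points at a hung or failing init script (step 2 above).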