MetricSign
Medium severity · data flow

Power BI Refresh Error:
DF-Executor-DriverError

What does this error mean?

The Spark driver process crashed or threw an unhandled exception during data flow execution. The driver coordinates all executor tasks — when it fails, the entire data flow job is terminated.

Common causes

  • The Spark driver ran out of memory (driver OOM) while collecting results or coordinating a large shuffle — the driver has a smaller memory allocation than executors
  • A transformation collected all data to the driver node (e.g., a collect() or orderBy() without partitioning) and the data volume exceeded driver memory
  • A transient Spark cluster issue caused the driver process to crash at startup or during a checkpoint
  • An unhandled exception in the Spark job plan (e.g., a null pointer in the query plan) caused the driver to abort the job
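As a rough illustration of the second cause, a back-of-envelope check shows how quickly a collected result overflows a driver heap. The function and all numbers below are hypothetical, not ADF defaults or an ADF API:

```python
def fits_in_driver(row_count, avg_row_bytes, driver_heap_gb, headroom=0.6):
    """Rough check: can a collected result fit in the driver heap?

    headroom reserves part of the heap for Spark's own bookkeeping;
    the 0.6 factor and all inputs are illustrative assumptions.
    """
    usable_bytes = driver_heap_gb * 1024**3 * headroom
    return row_count * avg_row_bytes <= usable_bytes

# 50 million rows at ~200 bytes each is ~10 GB of raw data --
# far more than the usable portion of an assumed 4 GB driver heap.
print(fits_in_driver(50_000_000, 200, 4))  # False
print(fits_in_driver(100_000, 200, 4))     # True
```

Executors split this same data across many machines; the driver has to hold a collected result alone, which is why a job that runs fine on the workers can still kill the driver.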

How to fix it

  1. Review the full ADF activity run output — the driver error message contains the underlying Spark driver exception; look for the root cause in the exception chain.
  2. In ADF Monitor, click the failed activity and select 'View details' to access the Spark driver and executor logs.
  3. If the driver crashed due to memory pressure, increase the Azure IR compute type to provide more driver memory.
  4. Retry the pipeline after a few minutes — transient Spark driver crashes caused by cluster startup issues often resolve on retry.
  5. If the error persists, enable debug mode with a smaller data sample to reproduce the crash and identify the specific transformation causing the driver failure.

Frequently asked questions

How do I see the specific exception that crashed the Spark driver?

In ADF Monitor, click the failed activity and select 'View details' for the Spark driver logs. Look for 'OutOfMemoryError' or 'Exception in thread main' — these contain the specific exception and stack trace.
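If you download the driver log, a small helper can scan it for those signatures automatically. This is a sketch: the patterns below are common Spark log markers, but exact wording varies across Spark versions.

```python
import re

# Signatures that typically mark the fatal driver exception in Spark logs.
# Illustrative patterns only; adjust for your Spark version's log format.
FATAL_PATTERNS = [
    r"java\.lang\.OutOfMemoryError",
    r'Exception in thread "main"',
    r"ERROR .*Driver",
]

def find_driver_failures(log_text):
    """Return the log lines matching a known fatal-driver signature."""
    hits = []
    for line in log_text.splitlines():
        if any(re.search(p, line) for p in FATAL_PATTERNS):
            hits.append(line.strip())
    return hits

sample = """\
21/03/01 10:02:11 INFO DAGScheduler: Submitting 200 tasks
21/03/01 10:04:55 ERROR SparkContext: Error initializing SparkContext.
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
"""
for hit in find_driver_failures(sample):
    print(hit)
```

The first matching line is usually the one to read: the exception class and stack trace that follow it identify the failing transformation.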

What is the difference between a driver error and an executor error?

The Spark driver coordinates; executors are workers. A driver error aborts the entire job. An executor error is more targeted — Spark can sometimes recover by retrying on another executor.

Does retrying the pipeline help with driver errors?

Often yes — if the crash was transient, a retry succeeds. If it is a consistent driver OOM, retrying fails the same way. Retry once; if it fails at the same stage again, investigate driver memory.
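That "retry once, then investigate" policy can be sketched in plain code. The run_data_flow callable here is a hypothetical stand-in for whatever triggers your pipeline (e.g., an SDK or REST call) and is assumed to raise on failure:

```python
import time

def run_with_single_retry(run_data_flow, wait_seconds=300):
    """Retry a failed data flow run exactly once.

    Transient driver crashes often clear on a second attempt; a second
    identical failure suggests a persistent problem (e.g., driver OOM)
    that retrying will not fix. run_data_flow is a hypothetical
    callable that raises an exception when the run fails.
    """
    try:
        return run_data_flow()
    except Exception as first_error:
        time.sleep(wait_seconds)  # give the cluster time to recycle
        try:
            return run_data_flow()
        except Exception:
            raise RuntimeError(
                "Data flow failed twice at the same stage; investigate "
                "driver memory instead of retrying again."
            ) from first_error
```

Capping the retries at one keeps a persistent OOM from burning cluster time on attempts that are guaranteed to fail the same way.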

Will downstream Power BI datasets be affected?

Yes — when the driver fails, the entire data flow job is aborted and no data is written to the sink. Dependent Power BI datasets will continue to serve stale data until a successful refresh completes.

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide
