Severity: Medium

Power BI Refresh Error:
SystemErrorSynapseSparkJobFailed

What does this error mean?

A Synapse Analytics Spark job invoked by ADF failed with a system-level error. The Spark cluster or job encountered an unrecoverable failure.

Common causes

  • The Synapse Spark pool does not have sufficient resources to start or run the job
  • A transient infrastructure failure caused the Spark cluster to terminate unexpectedly
  • The Spark job code raises an unhandled runtime exception (NullPointerException, OutOfMemoryError, etc.)
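The third cause, an unhandled runtime exception in the job code, is often the easiest to reproduce and fix. A minimal illustration in plain Python (not Spark-specific; the row schema and the `amount` field are assumptions for the example):

```python
def parse_amount(row: dict) -> float:
    # Unguarded access: a missing "amount" key or a None value raises at
    # runtime. An unhandled exception like this in job code is what Synapse
    # reports back to ADF as a system-level Spark job failure.
    return float(row["amount"])

def parse_amount_safe(row: dict, default: float = 0.0) -> float:
    # Defensive variant: tolerate missing or null values instead of failing.
    value = row.get("amount")
    return float(value) if value is not None else default

rows = [{"amount": "10.5"}, {"amount": None}, {}]
print(sum(parse_amount_safe(r) for r in rows))  # 10.5
```

Adding this kind of null handling in the job code removes one whole class of `SystemErrorSynapseSparkJobFailed` failures.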

How to fix it

  1. Check the Synapse Spark job logs in the Synapse Analytics workspace for the specific error.
  2. Verify the Spark pool has sufficient compute resources for the job.
  3. Retry the pipeline — transient Spark cluster errors often resolve on the next attempt.
  4. Review the Synapse workspace diagnostic logs for infrastructure-level failures.
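For step 3, retries are most effective with backoff between attempts. A minimal sketch, assuming `submit_job` is a hypothetical callable that raises on a transient failure (the delays are illustrative, not Synapse or ADF defaults):

```python
import time

def run_with_retry(submit_job, max_attempts=3, base_delay=5.0):
    """Retry `submit_job` on failure with exponential backoff.

    Delays grow as base_delay, 2*base_delay, 4*base_delay, ...
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return submit_job()
        except Exception:
            if attempt == max_attempts:
                raise  # persistent failure: surface it instead of looping
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Note that ADF activities have a built-in retry setting that achieves the same effect declaratively; a sketch like this is only needed when resubmitting runs from an external script.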

Frequently asked questions

Does this error affect all pipeline runs or just the current one?

That depends on the root cause: a persistent misconfiguration fails every run, while a transient issue may resolve on retry. Check the pipeline run history to see whether the failure repeats.

Can this error appear in Azure Data Factory and Microsoft Fabric pipelines?

Yes. The same error codes surface in both ADF and Fabric Data Factory pipelines.

How do I see the full error detail for an ADF pipeline failure?

In ADF Monitor, open the failed pipeline run, then select the failed activity. The detail pane shows the error code, message, and any sub-error codes.
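The same detail is available programmatically through the ADF REST API's "Activity Runs - Query By Pipeline Run" operation. A minimal URL builder (all identifiers are placeholders; POST this URL with a bearer token and a JSON body giving the `lastUpdatedAfter`/`lastUpdatedBefore` window):

```python
def activity_runs_url(subscription_id, resource_group, factory, run_id,
                      api_version="2018-06-01"):
    # Builds the ARM endpoint for querying activity runs of one pipeline run.
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.DataFactory"
        f"/factories/{factory}"
        f"/pipelineruns/{run_id}/queryActivityruns"
        f"?api-version={api_version}"
    )

print(activity_runs_url("sub-id", "rg", "my-factory", "run-guid"))
```

The response includes each activity's `error` object with the code and message you would otherwise read in the Monitor detail pane.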

Will downstream Power BI datasets be affected when an ADF pipeline fails?

Yes. A dataset that refreshes after the failed pipeline run will load stale data, or fail outright if the target table was truncated. Note that the Power BI refresh itself may report success while serving outdated data.
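One way to guard against silently stale refreshes is a freshness check before trusting downstream data. A minimal sketch (where `last_loaded` comes from, e.g., pipeline run history or a load-audit column, is an assumption about your setup, as is any particular threshold):

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_loaded: datetime, max_age: timedelta, now=None) -> bool:
    # True when the pipeline last wrote data longer ago than `max_age`.
    # `now` is injectable for testing; defaults to the current UTC time.
    now = now or datetime.now(timezone.utc)
    return now - last_loaded > max_age
```

A refresh-monitoring script could call this with, say, a 24-hour threshold and alert instead of letting the report serve outdated numbers.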

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-fault-tolerance
