Severity: High | Category: SQL

Power BI Refresh Error:
DELTA_CONCURRENT_WRITE_FAILED

What does this error mean?

Two write transactions attempted to modify the same Delta table at the same time, and Delta's optimistic concurrency control determined at commit time that the conflict could not be resolved safely. The losing transaction is aborted with this error.
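The mechanism can be illustrated with a minimal, pure-Python sketch of optimistic concurrency control (an analogy only, not Delta's actual implementation): two writers snapshot the same table version, the first to commit wins, and the second detects at commit time that the version moved and aborts. The `Table` and `ConcurrentWriteError` names here are invented for the sketch.

```python
class ConcurrentWriteError(Exception):
    """Commit-time conflict check failed (analogous to DELTA_CONCURRENT_WRITE_FAILED)."""

class Table:
    def __init__(self):
        self.version = 0

    def commit(self, read_version):
        # Optimistic check: abort if the table advanced since this writer read it.
        if self.version != read_version:
            raise ConcurrentWriteError(
                f"read version {read_version}, current version {self.version}"
            )
        self.version += 1
        return self.version

table = Table()

# Both writers snapshot the table at version 0 before writing.
a_read = table.version
b_read = table.version

table.commit(a_read)      # writer A commits first: table advances to version 1

try:
    table.commit(b_read)  # writer B's commit check fails: read 0, current 1
except ConcurrentWriteError as e:
    print("aborted:", e)
```

Because conflicts are only detected at commit time, neither writer blocks the other while working, which is why a retry of the aborted transaction (reading the new version) usually succeeds.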

Common causes

  • Multiple Databricks jobs or streams are writing to the same Delta table partition simultaneously
  • A MERGE or UPDATE operation conflicts with a concurrent INSERT or MERGE from another job
  • A streaming write overlaps with a batch write targeting the same data files
  • Workflow orchestration runs overlapping job runs that share an output table

How to fix it

  1. Retry the failed transaction — Delta's optimistic concurrency means the retry will usually succeed if the conflict was transient.
  2. Serialize writes to the same table by removing overlapping job schedules or adding explicit dependencies between jobs.
  3. If multiple streams write to the same table, consider partitioning the table so each stream writes to a dedicated partition.
  4. Enable Delta's auto-optimize and auto-compaction to reduce the number of concurrent small-file writes.
  5. Use the `maxFilesPerTrigger` Structured Streaming setting to reduce write frequency and lower the chance of overlap.
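Step 1 can be automated with a small retry wrapper. This is a sketch under assumptions: `write_to_delta` is a placeholder for your actual write, and matching the error identifier in the exception message stands in for whatever exception type your Spark/Delta client raises.

```python
import time

def retry_on_conflict(write_fn, max_attempts=4, base_delay=1.0):
    """Retry a write when a concurrent-write conflict is reported.

    write_fn: zero-argument callable performing the Delta write (placeholder).
    Sleeps with exponential backoff between attempts so competing jobs
    are less likely to collide again.
    """
    for attempt in range(max_attempts):
        try:
            return write_fn()
        except Exception as e:
            # Placeholder check: match on the error identifier in the message.
            if "DELTA_CONCURRENT_WRITE_FAILED" not in str(e):
                raise  # unrelated failure: do not retry
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the conflict
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Example: a write that conflicts once, then succeeds on retry.
attempts = {"n": 0}

def flaky_write():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("DELTA_CONCURRENT_WRITE_FAILED: conflict detected")
    return "committed"

print(retry_on_conflict(flaky_write, base_delay=0))
```

Note that retrying only helps with transient overlaps; if two schedules permanently collide, fix the orchestration (steps 2–3) instead of retrying harder.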

Frequently asked questions

Will retrying the failed job always fix this?

For transient overlaps, yes. If two jobs are permanently scheduled to overlap, retrying just delays the next failure. Serialize the schedules.

Does Delta Lake use pessimistic or optimistic locking?

Optimistic. Delta records the transaction in the _delta_log and checks for conflicts only at commit time. This maximizes throughput but means concurrent conflicting writes cause one transaction to abort.
