Severity: Medium | Data source error

Power BI Refresh Error:
DF-SQLDW-ErrorRowsFound

What does this error mean?

One or more rows failed to write to Azure Synapse Analytics because they violate a constraint in the Synapse table (NOT NULL, unique key, or data type overflow).

Common causes

  • A source row contains a null value in a column that is defined as NOT NULL in the Synapse table
  • A string value exceeds the column's VARCHAR or NVARCHAR length limit in Synapse
  • A numeric value is out of range for the Synapse column's precision and scale
  • A source column type cannot be cast to the Synapse column type during bulk insert
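Each of these causes can be checked client-side before the load. A minimal sketch in Python, assuming a hypothetical target schema and column names (the real constraints live in the Synapse table definition):

```python
# Pre-validate source rows against a target schema before the Synapse load.
# TARGET_SCHEMA and the sample row are hypothetical illustrations.
TARGET_SCHEMA = {
    "customer_id": {"type": int, "nullable": False},
    "name":        {"type": str, "nullable": False, "max_len": 10},
    "balance":     {"type": float, "nullable": True, "max_abs": 1e6},
}

def violations(row):
    """Return a list of constraint violations for one source row."""
    problems = []
    for col, rule in TARGET_SCHEMA.items():
        val = row.get(col)
        if val is None:
            if not rule["nullable"]:
                problems.append(f"{col}: NULL in NOT NULL column")  # cause 1
            continue
        if not isinstance(val, rule["type"]):
            try:
                rule["type"](val)
            except (TypeError, ValueError):
                problems.append(f"{col}: cannot cast {val!r}")      # cause 4
                continue
        if "max_len" in rule and len(str(val)) > rule["max_len"]:
            problems.append(f"{col}: exceeds VARCHAR({rule['max_len']})")  # cause 2
        if "max_abs" in rule and abs(float(val)) > rule["max_abs"]:
            problems.append(f"{col}: numeric overflow")             # cause 3
    return problems

row = {"customer_id": 1, "name": "a very long customer name", "balance": None}
print(violations(row))  # the name column violates its length limit
```

Running a check like this over a sample of the source data often identifies the offending column before the pipeline is even triggered.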

How to fix it

  1. In ADF Studio, open the data flow and click the Synapse sink transformation.
  2. Go to the Settings tab and find the Error row handling section. Set it to 'Skip incompatible rows' or 'Redirect incompatible rows to file' so that one bad row cannot fail the entire run.
  3. For 'Redirect', configure a linked service pointing to an error output path (Blob or ADLS Gen2) to capture the rejected rows for inspection.
  4. After configuring error row handling, run the pipeline and then inspect the redirected rows to identify the root cause.
  5. Fix the data quality issues upstream (add null handling, string truncation, or type casting in a derived column transformation) to prevent rows from being rejected.
  6. Once the upstream data is clean, remove the redirect handling if desired.
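The upstream cleansing in step 5 is configured as derived column expressions in the data flow; the same logic can be sketched in plain Python to show the intent. Column names and the VARCHAR(50) width are hypothetical:

```python
# Sketch of step 5's derived-column cleansing: null handling, string
# truncation, and type casting applied before the Synapse sink.
def clean_row(row):
    """Return a copy of the row coerced to fit the target table."""
    cleaned = dict(row)
    # Null handling: substitute a default for a NOT NULL column.
    if cleaned.get("name") is None:
        cleaned["name"] = "UNKNOWN"
    # String truncation: respect the target VARCHAR(50) width.
    cleaned["name"] = cleaned["name"][:50]
    # Type casting: coerce the id to the target integer type.
    cleaned["customer_id"] = int(cleaned["customer_id"])
    return cleaned

print(clean_row({"customer_id": "42", "name": None}))
# {'customer_id': 42, 'name': 'UNKNOWN'}
```

Truncation and default substitution change the data silently, so agree on these rules with the data owners before adding them to the pipeline.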

Frequently asked questions

What is the difference between Skip and Redirect for error row handling?

Skip discards incompatible rows silently. Redirect writes them to a separate error output path. Redirect is preferable for production pipelines because it makes bad rows visible for investigation.

Where do I configure Error row handling in ADF?

In the data flow, click the Synapse sink, go to Settings, set 'Error row handling' to 'Redirect incompatible rows', then configure the error row linked service and path.

Can this error happen even with correct data if the Synapse table schema changed?

Yes — if the Synapse table was altered (column made NOT NULL, width reduced) after the pipeline was built, previously-valid rows may now fail. Validate the pipeline after any Synapse schema changes.
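One way to validate after a schema change is to compare the schema the pipeline was built against with the table's current schema. A minimal sketch, assuming both schemas are available as dicts (in practice the current one would be read from INFORMATION_SCHEMA.COLUMNS in Synapse):

```python
# Detect schema drift between the schema the pipeline was built against
# and the current Synapse table schema. Both dicts are hypothetical.
BUILT_AGAINST = {"name": ("nvarchar", 100, True)}   # (type, width, nullable)
CURRENT       = {"name": ("nvarchar", 50, False)}   # narrowed and made NOT NULL

def drift(expected, actual):
    """Return a list of columns whose definition changed or disappeared."""
    changes = []
    for col, definition in expected.items():
        cur = actual.get(col)
        if cur is None:
            changes.append(f"{col}: dropped")
        elif cur != definition:
            changes.append(f"{col}: {definition} -> {cur}")
    return changes

print(drift(BUILT_AGAINST, CURRENT))  # reports the narrowed NOT NULL column
```

Running such a comparison as a pre-load validation step turns a mid-run DF-SQLDW-ErrorRowsFound failure into an explicit, early warning.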

Will downstream Power BI datasets be affected?

If the pipeline fails outright, the Synapse table receives no new data and connected datasets show stale data at their next refresh. If error row handling allows the run to succeed despite bad rows, the dataset is refreshed, but the skipped or redirected rows are silently absent from the output.
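Because rows can go missing silently once error row handling is on, it is worth reconciling counts after each run. A minimal sketch, with all counts as hypothetical inputs (in practice they would come from the source query, the sink, and the error output file):

```python
# Reconcile row counts after a run with error row handling enabled.
# If source != written + redirected, rows were lost without a trace
# (e.g. Skip mode discarded them).
def reconcile(source_rows, written_rows, redirected_rows):
    missing = source_rows - written_rows - redirected_rows
    return {"accounted": missing == 0, "silently_dropped": max(missing, 0)}

print(reconcile(source_rows=1_000, written_rows=990, redirected_rows=10))
# {'accounted': True, 'silently_dropped': 0}
```

A nonzero 'silently_dropped' count is a signal to switch from Skip to Redirect so the lost rows become visible.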

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide
