Severity: Medium · Category: Data flow

Power BI Refresh Error:
DF-Executor-SourceInvalidPayload

What does this error mean?

The data flow source received a payload that does not match the configured format or schema.

Common causes

  • The source dataset is configured as one format (e.g., Parquet) but the actual files are in a different format (e.g., CSV or JSON), so the parser reads an invalid binary payload
  • A source file was partially written by an upstream process that has not yet finished, so the data flow reads a truncated file and fails to parse the incomplete payload
  • A REST or OData source returned an error response (an HTML error page or other non-JSON body) instead of the expected data payload
  • A Parquet or Avro file is corrupted, zero-byte, or was overwritten mid-run by a concurrent process

How to fix it

  1. Check the ADF activity run output for the specific row or record that triggered the invalid payload error.
  2. Open the source dataset and verify the file format settings match the actual format of the source data — a CSV configured as Parquet will produce invalid payload errors.
  3. Enable debug mode and preview the source data to see if any files are malformed, truncated, or contain encoding issues.
  4. If reading from a REST or OData source, test the API endpoint directly (curl or Postman) to confirm it is returning well-formed responses.
  5. Check whether any source files are zero-byte or partially written by an upstream process that has not yet completed.

Frequently asked questions

How do I check whether a source file is the wrong format?

Open the source dataset and check format settings. Open the file in Azure Storage Explorer — if it can't be opened with the expected reader, the format is wrong. For CSV, download a sample and verify delimiter, encoding, and quoting.

Why does the pipeline succeed in debug mode but fail in a trigger run?

Debug mode lets you override the path, so you may be pointing at a valid sample. The trigger run reads the full production path, which may contain corrupted or partially-written files. Check for zero-byte or recently-modified files in the production folder.

Can I configure the data flow to skip malformed records instead of failing?

For CSV, enable 'Skip lines with errors' in the source dataset. Parquet and Avro have no skip-on-error — fix or remove malformed files first. A Get Metadata + If Condition pre-check prevents the data flow from starting on bad input.
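The Get Metadata + If Condition pre-check can be sketched as a pipeline fragment. The dataset and activity names below are placeholders, not part of any existing pipeline; the data flow activity itself would go inside `ifTrueActivities`:

```json
{
  "activities": [
    {
      "name": "GetSourceFileMetadata",
      "type": "GetMetadata",
      "typeProperties": {
        "dataset": { "referenceName": "SourceFileDataset", "type": "DatasetReference" },
        "fieldList": [ "exists", "size" ]
      }
    },
    {
      "name": "RunIfPayloadValid",
      "type": "IfCondition",
      "dependsOn": [
        { "activity": "GetSourceFileMetadata", "dependencyConditions": [ "Succeeded" ] }
      ],
      "typeProperties": {
        "expression": {
          "value": "@and(activity('GetSourceFileMetadata').output.exists, greater(activity('GetSourceFileMetadata').output.size, 0))",
          "type": "Expression"
        },
        "ifTrueActivities": []
      }
    }
  ]
}
```

With this gate in place, a missing or zero-byte file skips the data flow entirely instead of failing it mid-run.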

Will downstream Power BI datasets be affected?

Yes — the pipeline fails and no data reaches the target. Dependent datasets and reports will serve stale data until the source payload issue is resolved.

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide
