Severity: Medium · Category: Data flow

Power BI Refresh Error:
DF-Executor-SystemInvalidJson

What does this error mean?

The data flow encountered malformed or invalid JSON in the source — a file is corrupted, truncated, incorrectly encoded, or the document form setting (Array of documents vs. Document per line) doesn't match the actual structure.

Common causes

  • The JSON source dataset is set to 'Array of documents' but the actual file uses newline-delimited JSON (NDJSON/JSONL) — or vice versa
  • A JSON file was partially written by an upstream process and contains a truncated record, unclosed bracket, or missing closing brace
  • A source file contains invalid JSON characters — for example, unescaped backslashes, control characters, or a byte-order mark (BOM) at the start of the file
  • The source file is an empty JSON array `[]` and the data flow does not handle empty-array sources, producing a parse error on the empty structure
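
The first three causes can be reproduced offline with Python's standard `json` module. The samples below are hypothetical inline strings, not real ADF payloads:

```python
import json

# Hypothetical samples (not real ADF payloads) reproducing three causes.
samples = {
    "truncated": '[{"id": 1}, {"id": 2',        # file cut off mid-write
    "bom": '\ufeff[{"id": 1}]',                 # byte-order mark before the array
    "ndjson_as_array": '{"id": 1}\n{"id": 2}',  # NDJSON read as one document
}

failures = {}
for name, text in samples.items():
    try:
        json.loads(text)
    except json.JSONDecodeError as exc:
        failures[name] = exc.msg  # record why each sample failed to parse

for name, msg in failures.items():
    print(f"{name}: {msg}")
```

All three samples raise `json.JSONDecodeError`, each with a distinct message that mirrors what you would see in the data flow error output.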

How to fix it

  1. Check the ADF activity run output for the specific JSON record that failed to parse — the error often includes the offending payload fragment.
  2. Open the source dataset and verify the JSON document form is set correctly: 'Array of documents' for JSON arrays, 'Document per line' for newline-delimited JSON.
  3. Enable debug mode and preview the source to confirm the files contain well-formed JSON — look for truncated records, missing closing brackets, or encoding issues.
  4. If the source produces inconsistent JSON structure, add a Filter or Derived Column transformation to sanitize or skip malformed records before downstream transformations.
  5. Check whether the source file was partially written by an upstream process — a JSON file that was cut off mid-write will fail to parse.
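
Steps 3 and 5 can be automated with a small pre-flight check run against downloaded sample files before the pipeline is triggered. This is a standard-library sketch; the `document_per_line` flag and file paths are assumptions to adapt to your own source layout:

```python
import json
from pathlib import Path

def validate_json_file(path: Path, document_per_line: bool = False) -> list:
    """Return a list of problems found in one JSON source file (empty = OK)."""
    problems = []
    raw = path.read_bytes()
    if not raw:
        return ["zero-byte file"]
    if raw.startswith(b"\xef\xbb\xbf"):
        problems.append("UTF-8 BOM at start of file")
        raw = raw[3:]
    text = raw.decode("utf-8", errors="replace")
    if document_per_line:
        # 'Document per line': every non-blank line must parse on its own.
        for lineno, line in enumerate(text.splitlines(), 1):
            if not line.strip():
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError as exc:
                problems.append(f"line {lineno}: {exc.msg}")
    else:
        # 'Array of documents': the whole file must parse as one document.
        try:
            if json.loads(text) == []:
                problems.append("empty JSON array")
        except json.JSONDecodeError as exc:
            problems.append(f"parse error: {exc.msg}")
    return problems
```

Running this over a sample of source files surfaces truncation, BOM, and empty-array issues before the data flow ever sees them.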

Frequently asked questions

What is the difference between 'Array of documents' and 'Document per line' in ADF JSON source settings?

'Array of documents' expects a JSON array at the root: `[{...}, {...}]`. 'Document per line' (NDJSON/JSONL) expects each line to be a self-contained JSON object. Choosing the wrong setting makes the parser misinterpret the file's structure and fail.
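
The distinction is easy to see with Python's `json` module: the same two records in both physical layouts.

```python
import json

# 'Array of documents': one JSON array at the root of the file.
array_form = '[{"id": 1}, {"id": 2}]'
records_from_array = json.loads(array_form)

# 'Document per line' (NDJSON/JSONL): each line is a complete JSON object.
ndjson_form = '{"id": 1}\n{"id": 2}'
records_from_ndjson = [json.loads(line) for line in ndjson_form.splitlines()]

# Same logical data, two incompatible physical layouts.
assert records_from_array == records_from_ndjson == [{"id": 1}, {"id": 2}]
```

Feeding `ndjson_form` to a parser expecting the array form (or vice versa) raises a decode error, which is exactly the mismatch the document-form setting controls.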

How do I validate that a JSON file is well-formed before the data flow runs?

Download a sample and validate it with a JSON linter (e.g. jsonlint.com) or in VS Code — look for missing brackets, unescaped characters, or BOM markers. In the pipeline, add a Get Metadata activity plus an If Condition before the data flow to skip zero-byte files.

Can I configure the data flow to skip invalid JSON records?

Not directly — ADF fails the entire file when it hits an invalid record. As a workaround: read the file as plain text, filter out invalid lines with a Derived Column and a regular expression, write the clean output to a staging location, then run the JSON data flow against that.
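
ADF's Derived Column expression syntax is not shown here; as an offline sketch of the same filter-then-stage idea, assuming newline-delimited input:

```python
import json

def clean_ndjson(lines):
    """Split newline-delimited input into parseable records and a reject count."""
    good, rejected = [], 0
    for line in lines:
        stripped = line.strip()
        if not stripped:
            continue  # ignore blank lines
        try:
            json.loads(stripped)   # keep only lines that parse
            good.append(stripped)
        except json.JSONDecodeError:
            rejected += 1          # count, but do not fail on, bad records
    return good, rejected

# Hypothetical input: two valid records, one truncated, one non-JSON line.
raw_lines = ['{"id": 1}', '{"id": 2', 'not json', '{"id": 3}']
good, rejected = clean_ndjson(raw_lines)
```

Writing `good` to a staging file gives the downstream data flow a source that is guaranteed to parse, while `rejected` can be logged for monitoring.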

Will downstream Power BI datasets be affected?

Yes — the pipeline fails and no data reaches the target. Dependent datasets and reports serve stale figures until the JSON source issue is resolved.

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide
