Severity: high · Category: data source

Power BI Refresh Error:
ParquetDataCountNotMatchColumnCount

What does this error mean?

The number of values in a Parquet row does not match the number of columns in the file schema — the file is structurally inconsistent.

Common causes

  • The Parquet file was corrupted during write: a partial row was written before the process was interrupted
  • The Parquet writer has a bug that sometimes generates row groups with mismatched column counts
  • The file was modified after writing, breaking the internal column count consistency

How to fix it

  1. Validate the Parquet file using Parquet Tools or DuckDB to identify the corrupted row group.
  2. Re-generate the Parquet file from the source system.
  3. If only some row groups are corrupted, consider using a Spark notebook to read only the valid row groups and write a clean Parquet file.

Frequently asked questions

Does this error affect all pipeline runs or just the current one?

It depends on the root cause. A persistent misconfiguration fails every run, while a transient issue may resolve on retry. Check the run history to see whether the failure repeats.

Can this error appear in Azure Data Factory and Microsoft Fabric pipelines?

Yes — the same connector errors appear in both ADF and Fabric Data Factory pipelines.

How do I see the full error detail for an ADF pipeline failure?

In ADF Monitor, click the failed run, then the failed activity. The detail pane shows the error code, message, and sub-error codes.

Will downstream Power BI datasets be affected when an ADF pipeline fails?

Yes: a dataset that refreshes after the failed pipeline run will load stale data, or fail outright if the target table was cleared. Note that the refresh itself can report success while still serving incorrect data.

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-parquet
