Severity: Medium | Category: data source

Power BI Refresh Error:
ParquetDataTypeNotMatchColumnType

What does this error mean?

The actual data type in a Parquet column does not match the type declared in the file schema metadata.

Common causes

  • The Parquet writer declared a column as INT32 in the schema but wrote INT64 values, or a similar type-declaration mismatch
  • Schema evolution in the Parquet file was not handled consistently — old and new row groups have different column types

How to fix it

  1. Read the Parquet file with a Parquet tools viewer to inspect the declared schema versus the actual stored types.
  2. Re-generate the Parquet file with a consistent schema that accurately reflects the stored column types.
  3. Use ADF schema drift handling or a mapping data flow to process files with mixed-type row groups.

Frequently asked questions

Does this error affect all pipeline runs or just the current one?

Depends on the root cause. A persistent misconfiguration fails every run; a transient issue may resolve on retry. Check the run history.

Can this error appear in Azure Data Factory and Microsoft Fabric pipelines?

Yes — the same connector errors appear in both ADF and Fabric Data Factory pipelines.

How do I see the full error detail for an ADF pipeline failure?

In ADF Monitor, click the failed run, then the failed activity. The detail pane shows the error code, message, and sub-error codes.

Will downstream Power BI datasets be affected when an ADF pipeline fails?

Yes — a dataset that refreshes after the failed pipeline will either load stale data or fail outright if the target table was cleared. Note that the Power BI refresh itself may report success while serving outdated data.

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-parquet
