Medium severity · Data quality

Databricks Error:
UNSUPPORTED_DATA_TYPE

What does this error mean?

An operation or connector encountered a data type it does not support. This error typically occurs when writing to an external system that has no equivalent for a Spark SQL type, or when a Spark operation does not accept a particular complex type.

Common causes

  • Writing a STRUCT or MAP column to a JDBC sink (e.g. MySQL, SQL Server) that does not support complex types
  • A Databricks JDBC source returns a database-specific type with no Spark equivalent
  • Using INTERVAL or VOID types in an operation that requires a concrete primitive type
  • A Delta Lake table was created with a type supported in a newer Databricks Runtime not available on the current cluster
  • A connector (e.g. the Databricks ODBC driver) does not support a newly introduced Spark type

How to fix it

  1. Cast the unsupported column to a STRING or JSON string before writing: to_json(struct_col) AS col_json.
  2. Use explode or inline to flatten ARRAY or STRUCT columns before JDBC writes.
  3. Check the target system's supported type list and map each Spark column type to a compatible target type.
  4. For JDBC connectors, specify a custom type mapping with the customSchema option.
  5. Upgrade the Databricks Runtime if the type was added in a newer version and you need native support.
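Fixes 1 and 2 both come down to converting a complex value into something the target can hold: either one JSON string, or one scalar column per field. A minimal plain-Python sketch of both ideas (the record and field names here are hypothetical; inside Spark you would use to_json(struct_col) or select struct_col.* instead):

```python
import json

# A row with a struct-like nested field, roughly as Spark would hand it to a sink.
row = {"id": 7, "props": {"color": "red", "size": "L"}}

# Fix 1: serialize the nested value to a JSON string (the to_json(struct_col) idea),
# so a plain TEXT/VARCHAR column can hold it.
serialized = {**row, "props_json": json.dumps(row["props"])}
del serialized["props"]

# Fix 2: flatten the struct into top-level scalar columns (the struct_col.* idea),
# so each field maps to its own primitive column.
flattened = {"id": row["id"], **{f"props_{k}": v for k, v in row["props"].items()}}

print(serialized)  # {'id': 7, 'props_json': '{"color": "red", "size": "L"}'}
print(flattened)   # {'id': 7, 'props_color': 'red', 'props_size': 'L'}
```

Either shape can then be written to a JDBC sink that has no complex types; which one to pick depends on whether the target needs to query individual fields (flatten) or just store the payload (serialize).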

Frequently asked questions

Does this error occur in Delta-to-Delta writes?

Rarely. Delta Lake supports all Spark SQL types. UNSUPPORTED_DATA_TYPE most commonly appears when writing from Delta to external systems (JDBC, ODBC, CSV) that have narrower type support.

What is the best way to handle STRUCT columns in a Postgres write?

Serialize the struct column to a JSON string with to_json(struct_col) and map it to a TEXT or JSONB column in PostgreSQL.
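A minimal sketch of what that serialization produces (table and field names hypothetical; the DDL is shown as a comment, not executed): PostgreSQL's JSONB type validates the string on insert, so the key property is that to_json's output round-trips as valid JSON.

```python
import json

# Hypothetical struct value taken from one Spark row.
event = {"user": {"id": 42, "plan": "pro"}}

# Equivalent of to_json(struct_col): a JSON string Postgres can store
# in a TEXT or JSONB column.
payload = json.dumps(event["user"])

# Assumed target DDL:
#   CREATE TABLE events (user_json JSONB);
# JSONB accepts the value because it parses back cleanly:
assert json.loads(payload) == {"id": 42, "plan": "pro"}
print(payload)  # {"id": 42, "plan": "pro"}
```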
