Medium severity · dbt

dbt Error:
Test Failure

What does this error mean?

One or more dbt tests asserted a data quality condition that was not met. By default, test failures cause the dbt run to return a non-zero exit code, which dbt Cloud marks as a failed job.

Common causes

  • A `not_null` test found NULL values in a column expected to always be populated
  • A `unique` test found duplicate values in a column or combination of columns expected to be unique
  • A `relationships` test found orphaned foreign key values with no matching primary key in the referenced table
  • An `accepted_values` test found values outside the allowed set (e.g., a new status code not yet added to the accepted list)
  • A custom test returned rows, indicating the assertion was violated
  • Upstream data quality issues propagated through the pipeline into a tested model
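For reference, the generic tests above are declared in a model's schema.yml. A minimal sketch — the model and column names (`orders`, `order_id`, `customer_id`, `status`) and the accepted values are illustrative, not from your project:

```yaml
# models/schema.yml — illustrative names and values
version: 2

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - not_null        # fails if any row has a NULL order_id
          - unique          # fails if order_id is duplicated
      - name: customer_id
        tests:
          - relationships:  # fails on orphaned foreign keys
              to: ref('customers')
              field: customer_id
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'completed', 'returned']
```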

How to fix it

  1. Run `dbt test --select <model_name>` locally to reproduce the failure and see the failing row count.
  2. For `unique` failures, query the column directly: `SELECT <col>, COUNT(*) FROM <model> GROUP BY 1 HAVING COUNT(*) > 1`. For `not_null` failures, use `SELECT COUNT(*) FROM <model> WHERE <col> IS NULL`.
  3. Check whether the failure originates in source data (before any transformation) by running `dbt test --select source:<source_name>`.
  4. If the test should warn rather than fail, change its `severity` from `error` to `warn` in the schema.yml definition.
  5. For `relationships` tests, check whether the referenced table was loaded correctly in the same run.
  6. If the failure is known and acceptable (e.g., historical data exceptions), add a `where` config to the test definition to exclude known-bad rows.
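Steps 4 and 6 are both per-test configs in schema.yml. A hedged sketch — the model, columns, and the `where` filter value are assumptions for illustration:

```yaml
# models/schema.yml — column names and filter are illustrative
version: 2

models:
  - name: orders
    columns:
      - name: discount_code
        tests:
          - not_null:
              config:
                severity: warn   # surface failures without failing the job
      - name: order_id
        tests:
          - unique:
              config:
                # exclude known-bad historical rows from the assertion
                where: "order_date >= '2020-01-01'"
```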

Frequently asked questions

Should dbt test failures block my pipeline?

It depends on severity. Critical tests (`not_null` on join keys, `unique` on primary keys) should block, because downstream models built on violated assumptions produce wrong results. Informational tests should use `severity: warn` to surface issues without stopping the run.

How do I stop a flaky test from blocking my pipeline?

Use `error_if` and `warn_if` thresholds: with `error_if: '>100'`, a run with 100 or fewer failing rows raises a warning (under the default `warn_if: '!=0'`) rather than an error. This keeps noisy tests from halting runs while still flagging large regressions.
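The thresholds are also set in the test's config block; a sketch with illustrative model, column, and threshold values:

```yaml
# models/schema.yml — thresholds are illustrative, tune to your data
version: 2

models:
  - name: events
    columns:
      - name: session_id
        tests:
          - not_null:
              config:
                warn_if: ">10"     # 11+ failing rows -> warning
                error_if: ">100"   # 101+ failing rows -> error, blocks the run
```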

My test failure count is growing over time — what does that signal?

Gradual growth indicates upstream data drift — new source values, changing nullability, or a record type that doesn't meet assumptions. Investigate the source for recent changes rather than silencing the test.

Official documentation: https://docs.getdbt.com/docs/build/data-tests
