MetricSign
Severity: high · Category: performance

Power BI Refresh Error:
RPCTimeoutException

What does this error mean?

The dbt Cloud job exceeded its configured maximum run time and was forcefully terminated by the dbt Cloud scheduler. All models that had not yet completed are marked as cancelled. This is a job-level timeout, distinct from a warehouse query timeout — the entire dbt Cloud run is killed, not just a single model's SQL.

Common causes

  • The dbt Cloud job's 'Run Timeout' setting is set too low for the actual run duration of all models in the job
  • A slow model blocked the critical path, causing the total job duration to exceed the timeout threshold
  • An upstream job was delayed, causing this job to start late and run into its own timeout window
  • A large incremental model performing a full merge on a very large table caused the run to stall
  • The warehouse was throttled or under resource contention, slowing all queries and pushing the job over its time limit
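The first cause above can be checked empirically by comparing the configured timeout against a high percentile of recent run durations. A minimal sketch — the duration values and timeout below are hypothetical stand-ins for your job's real history:

```python
# Hypothetical recent run durations for the job, in seconds
# (in practice, pull these from the dbt Cloud run history or runs API).
recent_runs = [2850, 2910, 3100, 2760, 3300, 2980, 3450, 2890, 3200, 3050]

run_timeout = 3000  # the job's configured 'Run Timeout', in seconds

def p95(durations):
    """Return the 95th-percentile duration (nearest-rank method)."""
    ordered = sorted(durations)
    rank = max(int(0.95 * len(ordered) + 0.5) - 1, 0)
    return ordered[min(rank, len(ordered) - 1)]

observed_p95 = p95(recent_runs)
# A p95-plus-buffer target (here ~20%) leaves headroom for slow days.
recommended = int(observed_p95 * 1.2)

print(f"p95 duration: {observed_p95}s, configured timeout: {run_timeout}s")
if run_timeout < recommended:
    print(f"Timeout is tight - consider raising it to ~{recommended}s")
```

If the configured timeout sits below the p95-plus-buffer figure, intermittent RPCTimeoutException failures are expected even when nothing is wrong with the models themselves.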

How to fix it

  1. Check the dbt Cloud job settings and increase the 'Run Timeout' value to accommodate the actual p95 run duration plus a buffer.
  2. Identify the slow model by reviewing per-model execution times in the dbt Cloud run artifacts — look for models that ran close to or past the timeout threshold.
  3. Optimise the slowest model: add clustering keys, reduce the incremental merge predicate scope, or break it into smaller incremental chunks.
  4. Consider splitting a single large job into two jobs (transformation and testing) with separate timeout budgets.
  5. If upstream delays are the trigger, add a job dependency chain in dbt Cloud so this job only starts after the upstream job completes.
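Step 2 can be scripted against the run_results.json artifact that dbt writes after each run. The snippet below ranks models slowest-first; the artifact content here is a hand-built stand-in, not output from a real run:

```python
import json

# Stand-in for a run_results.json artifact; each entry in "results"
# carries the model's unique_id, final status, and wall-clock time.
run_results = json.loads("""
{
  "results": [
    {"unique_id": "model.analytics.stg_orders", "status": "success", "execution_time": 42.1},
    {"unique_id": "model.analytics.fct_orders", "status": "success", "execution_time": 1810.4},
    {"unique_id": "model.analytics.dim_customers", "status": "cancelled", "execution_time": 0.0}
  ]
}
""")

# Sort models slowest-first to surface timeout candidates.
slowest = sorted(run_results["results"],
                 key=lambda r: r["execution_time"], reverse=True)

for r in slowest:
    print(f"{r['unique_id']:40s} {r['status']:10s} {r['execution_time']:8.1f}s")
```

Any model whose execution time dominates the total is the first candidate for the optimisations in step 3.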

Frequently asked questions

Is RPCTimeoutException the same as a warehouse query timeout?

No — a warehouse query timeout (e.g., Snowflake's STATEMENT_TIMEOUT_IN_SECONDS) kills a single SQL query. RPCTimeoutException kills the entire dbt Cloud job run, cancelling all incomplete models.

Will models that completed before the timeout keep their results?

Yes — models that successfully wrote their results to the warehouse before the timeout are unaffected. Only models that were still running when the job was killed are in an uncertain state.
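This split between completed and interrupted models is visible in the same run_results.json artifact: models that finished report a status of success, while models killed mid-run report cancelled. A small sketch with fabricated example data:

```python
# Fabricated per-model results; a real run_results.json carries the same
# "status" field on each entry under "results".
results = [
    {"unique_id": "model.analytics.stg_orders", "status": "success"},
    {"unique_id": "model.analytics.fct_orders", "status": "success"},
    {"unique_id": "model.analytics.dim_customers", "status": "cancelled"},
    {"unique_id": "model.analytics.fct_revenue", "status": "cancelled"},
]

completed = [r["unique_id"] for r in results if r["status"] == "success"]
needs_rerun = [r["unique_id"] for r in results if r["status"] != "success"]

print("Safe in warehouse :", completed)
print("Re-run needed     :", needs_rerun)
```

Only the models in the second list need to be re-run after raising the timeout; the completed models' warehouse tables are already up to date.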

Where is the Run Timeout configured in dbt Cloud?

In the dbt Cloud UI: Jobs → select the job → Edit → scroll to 'Execution settings' → 'Run Timeout'. Set to 0 to disable the timeout entirely (not recommended for production).
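The same setting can also be managed programmatically through the dbt Cloud Administrative API, where the run timeout appears as an execution.timeout_seconds field on the job object. The sketch below only builds the request — the account ID, job ID, and token are placeholders, nothing is sent over the network, and the exact payload shape should be verified against the API docs for your dbt Cloud version:

```python
# Placeholders - substitute real values; nothing here is sent over the network.
ACCOUNT_ID = 12345
JOB_ID = 67890
API_TOKEN = "dbtc_placeholder_token"  # a dbt Cloud service token (placeholder)

# dbt Cloud Administrative API v2 job endpoint (verify against current docs).
url = f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/"

headers = {
    "Authorization": f"Token {API_TOKEN}",
    "Content-Type": "application/json",
}

# Assumed shape: the job's run timeout lives under execution.timeout_seconds;
# 0 disables the timeout (not recommended for production, as noted above).
payload = {"execution": {"timeout_seconds": 5400}}

print("POST", url)
print(payload)
# To apply for real: requests.post(url, headers=headers, json=payload)
```

Scripting the change is mainly useful when the same timeout policy has to be applied consistently across many jobs.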
