Medium severity · data source

Azure Data Factory Data Flow Error:
DF-Cosmos-FailToResetThroughput

What does this error mean?

The ADF Mapping Data Flow Cosmos DB connector temporarily scaled up container throughput (RU/s) to accelerate the write operation, but failed to reset the throughput back to the original level after the data flow completed. Your Cosmos DB container may now be running at a higher RU/s allocation than intended.

Common causes

  • ADF automatically scales Cosmos DB throughput before a large write operation and resets it afterward; if ADF lacks the permissions required to modify throughput, the reset fails
  • The Cosmos DB container is provisioned with autoscale throughput rather than manual throughput, which the connector's throughput reset operation does not support
  • A transient Cosmos DB API error prevented the throughput reset from completing, leaving the container at the elevated RU/s setting
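To tell the autoscale cause apart from the others, you can inspect the container's throughput mode programmatically. The following is a minimal sketch, assuming the azure-cosmos Python SDK; the account URL, key, database, and container names are placeholders, not values from this error.

```python
# Hedged sketch: classify a Cosmos DB container's throughput mode
# (autoscale vs. manual) so you know whether the connector's reset
# operation can apply to it. Requires `pip install azure-cosmos`.

def throughput_mode(offer) -> str:
    """Classify a ThroughputProperties-like object.

    Autoscale offers carry a max throughput value; manual offers do not.
    """
    if getattr(offer, "auto_scale_max_throughput", None):
        return "autoscale"
    return "manual"


if __name__ == "__main__":
    from azure.cosmos import CosmosClient  # placeholder connection details below

    client = CosmosClient(
        "https://<account>.documents.azure.com:443/", credential="<key>"
    )
    container = client.get_database_client("<database>").get_container_client(
        "<container>"
    )
    offer = container.read_offer()  # current throughput settings
    print(throughput_mode(offer))
```

If this prints `autoscale`, the second cause above applies and the fix is to stop ADF from managing throughput rather than to grant more permissions.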

How to fix it

  1. Immediately check the Cosmos DB container's current throughput (RU/s) in the Azure portal; if it was automatically scaled up, manually reset it to your standard level to avoid unexpected cost.
  2. Verify the linked service identity has the 'Cosmos DB Built-in Data Contributor' or 'Cosmos DB Operator' role, which includes permission to modify throughput settings.
  3. If the container uses autoscale throughput, disable ADF's automatic throughput scaling option in the Cosmos DB sink settings; ADF should not attempt to override autoscale.
  4. Re-run the data flow to confirm that the pipeline completes and throughput is reset correctly after the fix.
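Steps 1 and 4 can also be done from code instead of the portal. This is a sketch under assumptions: it uses the azure-cosmos Python SDK, assumes the container uses manual (not autoscale) throughput, and the standard RU/s level and connection details are placeholders you must replace.

```python
# Hedged sketch: read the container's current manual RU/s and reset it
# if the failed cleanup left it above your intended level.
# Requires `pip install azure-cosmos`.

STANDARD_RU = 400  # your intended provisioned throughput (assumption)


def needs_reset(current_ru: int, standard_ru: int = STANDARD_RU) -> bool:
    """True when the container was left above its intended RU/s level."""
    return current_ru > standard_ru


if __name__ == "__main__":
    from azure.cosmos import CosmosClient  # placeholder connection details below

    client = CosmosClient(
        "https://<account>.documents.azure.com:443/", credential="<key>"
    )
    container = client.get_database_client("<database>").get_container_client(
        "<container>"
    )
    offer = container.read_offer()
    if needs_reset(offer.offer_throughput):
        # Reset the container back to the standard manual level.
        container.replace_throughput(STANDARD_RU)
        print(f"Reset {offer.offer_throughput} -> {STANDARD_RU} RU/s")
```

Note that the identity running this script needs throughput-management permission, which is the same requirement described in step 2.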

Frequently asked questions

Did my data get written even though the throughput reset failed?

Usually yes — the data write completed before the throughput reset failed. The error is in the post-write cleanup step. Check the target container to confirm the expected data is present.
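One quick way to confirm the data is present is a document count against the target container. A minimal sketch, assuming the azure-cosmos Python SDK and placeholder connection details:

```python
# Hedged sketch: count documents in the target container to confirm the
# write completed before the throughput reset failed.
# Requires `pip install azure-cosmos`.

COUNT_QUERY = "SELECT VALUE COUNT(1) FROM c"


def item_count(results) -> int:
    """A VALUE COUNT(1) query yields a single scalar result."""
    return next(iter(results))


if __name__ == "__main__":
    from azure.cosmos import CosmosClient  # placeholder connection details below

    client = CosmosClient(
        "https://<account>.documents.azure.com:443/", credential="<key>"
    )
    container = client.get_database_client("<database>").get_container_client(
        "<container>"
    )
    results = container.query_items(COUNT_QUERY, enable_cross_partition_query=True)
    print("documents in target container:", item_count(results))
```

Compare the count against the row count reported in the data flow's sink metrics to confirm nothing is missing.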

How do I disable ADF's automatic throughput scaling for Cosmos DB?

In the Cosmos DB sink settings, locate the 'Throughput' property and configure it so that throughput is left unchanged, rather than letting ADF manage it automatically.

Does this error occur with every Cosmos DB write, or only large ones?

ADF only triggers throughput scaling for writes that exceed the current provisioned RU/s. Small writes within the provisioned limit won't trigger scaling and won't encounter this error.

Official documentation: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide
