MetricSign
Severity: Medium · Category: connection

Power BI Refresh Error:
CHUNK_DOWNLOAD_FAILED

What does this error mean?

The Snowflake Python connector failed to download one or more result chunks from cloud storage (S3, Azure Blob Storage, or GCS) after query execution. Large result sets are split into chunks served via pre-signed URLs; when a chunk download fails, the entire result fetch aborts.

Common causes

  • A pre-signed URL for the result chunk expired before the download completed (large result sets with slow networks)
  • Network interruption between the client machine and cloud storage during multi-chunk result retrieval
  • Proxy or firewall settings block direct cloud storage access from the connector
  • The client_prefetch_threads setting is too high, causing resource contention

How to fix it

  1. Retry the query — transient network errors are the most common cause and usually resolve on retry.
  2. Reduce the result set size using pagination (LIMIT / OFFSET or result scan) to avoid large multi-chunk downloads.
  3. Set client_prefetch_threads=1 in the connection to disable parallel chunk prefetching, which can help on constrained networks.
  4. Check proxy settings — if the client routes through a proxy, ensure the proxy allows connections to *.amazonaws.com, *.blob.core.windows.net, or *.storage.googleapis.com.
  5. Upgrade the Snowflake Python connector to the latest version, as chunk download reliability improves regularly.
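Steps 3 and 4 can be expressed in the connection configuration. A minimal sketch, assuming a username/password connection — the account, user, and proxy values below are placeholders, not real endpoints:

```python
# Hypothetical settings to pass to snowflake.connector.connect().
# Account, user, and proxy values are placeholders.
conn_kwargs = {
    "account": "my_account",       # placeholder
    "user": "my_user",             # placeholder
    "password": "...",             # supply via a secrets manager, not literals
    "client_prefetch_threads": 1,  # step 3: disable parallel chunk prefetching
}

# Step 4: the connector honors standard proxy environment variables; the
# proxy itself must allow the cloud storage domains listed above.
proxy_env = {
    "HTTPS_PROXY": "http://proxy.example.com:8080",  # placeholder
    "NO_PROXY": "localhost",
}

# import snowflake.connector
# conn = snowflake.connector.connect(**conn_kwargs)
```

With a single prefetch thread, chunks are fetched sequentially, which trades throughput for stability on constrained networks.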

Frequently asked questions

Can I avoid chunk downloads entirely for large queries?

Use COPY INTO to export large results to a stage and read from there, which bypasses the in-memory chunk mechanism. For programmatic access, use the arrow_with_config fetch type for streaming results.
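The COPY INTO route can be sketched as follows; the stage name, path, and source table are hypothetical:

```python
# Hypothetical SQL for exporting a large result to an internal stage
# instead of fetching it through result chunks. Stage, path, and table
# names are placeholders.
export_sql = """
COPY INTO @my_stage/large_result/
FROM (SELECT * FROM my_large_table)
FILE_FORMAT = (TYPE = PARQUET)
OVERWRITE = TRUE
"""

# The exported files can then be downloaded with GET or read from the
# stage, e.g.:
# cursor.execute(export_sql)
# cursor.execute("GET @my_stage/large_result/ file:///tmp/results/")
```

This moves the transfer from the connector's result-chunk path to stage file access, which can be retried file by file.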

Does the Snowflake Python connector support automatic retry on chunk failures?

The connector retries individual HTTP requests internally, but a full chunk download failure raises an exception to the caller. Wrap your fetch logic in a retry decorator to handle transient failures gracefully.
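A retry wrapper along these lines can absorb such transient failures. This is a generic sketch, not a connector API; the exception types and delays are placeholders to adapt to your environment:

```python
import functools
import time

def retry_on_failure(max_attempts=3, base_delay=1.0, exceptions=(Exception,)):
    """Retry the wrapped call with exponential backoff on the given exceptions."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == max_attempts:
                        raise  # exhausted: surface the error to the caller
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

# Usage sketch: wrap the fetch, not the whole session, so a retry re-runs
# only the chunk download phase.
# @retry_on_failure(max_attempts=3, exceptions=(OperationalError,))
# def fetch_all(cursor):
#     return cursor.fetchall()
```

Narrow the `exceptions` tuple to the connector's operational errors so that genuine query errors still fail fast.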
