Microsoft Fabric Data Pipelines are the orchestration layer of the Fabric platform: they move and transform data across workspaces, loading data into Lakehouses, triggering Spark notebooks, and running dataflow refreshes. When a Fabric pipeline fails, the downstream semantic models and Direct Lake datasets that depend on its output are affected immediately.
Why Fabric pipeline monitoring is different from ADF monitoring
Fabric Data Pipelines and Azure Data Factory share a common heritage (Fabric pipelines are built on the same orchestration engine as ADF), but there are key operational differences:
Direct Lake impact: Fabric pipelines often write to Delta tables in a Lakehouse. Power BI Direct Lake datasets read directly from those Delta tables without an import step. When the pipeline produces incorrect or incomplete output, the Direct Lake dataset immediately reflects it — there is no cached import copy to fall back on.
Workspace capacity context: Fabric pipelines run within a Fabric capacity. A pipeline that's slow or failing may be experiencing capacity pressure — the workspace is competing for resources with concurrent Spark jobs, notebook runs, or other pipelines. Capacity metrics are relevant to diagnosing Fabric pipeline failures in a way they are not for ADF.
Monitoring Fabric pipelines via the API
The Fabric REST API provides item run history for Data Pipelines. The API returns run status, start and end times, and error messages for failed runs. MetricSign polls this API to detect failures and create incidents.
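A polling loop along these lines can detect failures. This is a minimal sketch, not MetricSign's implementation: the endpoint path and response fields (`status`, `startTimeUtc`, `endTimeUtc`, `failureReason`) follow the publicly documented Fabric Job Scheduler API, but verify them against the current API reference, and the workspace ID, item ID, and token are placeholders you must supply.

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def list_job_instances(workspace_id, item_id, token):
    """Fetch run history for a pipeline item. IDs and token are placeholders."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items/{item_id}/jobs/instances"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])

def failed_runs(instances):
    """Filter job instances down to failed runs worth raising an incident for."""
    return [
        {
            "id": run.get("id"),
            "start": run.get("startTimeUtc"),
            "end": run.get("endTimeUtc"),
            "error": (run.get("failureReason") or {}).get("message"),
        }
        for run in instances
        if run.get("status") == "Failed"
    ]
```

In a real monitor you would run this on a schedule, de-duplicate against already-seen run IDs, and open an incident for each new entry `failed_runs` returns.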
Connecting Fabric pipeline failures to Power BI semantic models
By matching Fabric pipeline output paths (Lakehouse table names or ADLS paths) against the data sources configured for Direct Lake semantic models, MetricSign establishes lineage links. When a pipeline fails, the linked semantic models are surfaced in the incident so you know the downstream impact immediately.
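The matching step can be sketched as a set intersection between what each pipeline writes and what each semantic model reads. All names in this example are hypothetical, and it ignores real-world complications such as ADLS path normalization:

```python
def link_models(pipeline_outputs, model_sources):
    """Map each pipeline to the semantic models that read its output tables.

    pipeline_outputs: {pipeline_name: set of Lakehouse table names it writes}
    model_sources:    {model_name: set of Lakehouse table names it reads}
    """
    return {
        pipeline: sorted(
            model
            for model, sources in model_sources.items()
            if outputs & sources  # any shared table implies a lineage link
        )
        for pipeline, outputs in pipeline_outputs.items()
    }

# Hypothetical example: one pipeline feeding one of two Direct Lake models.
outputs = {"load_sales": {"sales_fact", "sales_dim"}}
sources = {"Sales Report": {"sales_fact"}, "HR Model": {"employees"}}
print(link_models(outputs, sources))  # {'load_sales': ['Sales Report']}
```

When an incident is created for a failed pipeline, the models in its entry of this mapping are attached as downstream impact.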