Three Generations, Running Simultaneously
If your organization started building data pipelines more than five years ago, there's a good chance you're running three generations of ETL tooling at the same time: SSIS packages from the SQL Server era, ADF pipelines built during the Azure migration, and increasingly, Fabric Data Pipelines as Microsoft pushes the platform forward.
Each generation has different monitoring capabilities. SSIS logs to SQL Server Agent history — readable but isolated from cloud tooling. ADF has Azure Monitor integration, activity run metrics, and native alerting through the Azure portal. Fabric Pipelines have their own monitoring interface, separate from ADF's, even though they share some underlying infrastructure.
None of these tools gives you a cross-generation view. The question "is my data stack healthy right now?" requires checking three separate monitoring consoles, correlating timestamps manually, and knowing which version of each pipeline is the authoritative one. For teams that have been adding tooling incrementally, this is the reality — and it creates real gaps in oversight.
SSIS: What's Still Running and Why
Many organizations underestimate how much SSIS is still in production. The packages work, they've been running for years without issues, and migrating them to ADF requires effort that never makes it to the top of the priority list. The result: production data pipelines that no one is actively monitoring in any cloud-integrated way.
The problem isn't that SSIS is bad — it's that SSIS monitoring doesn't integrate with your cloud monitoring stack. When an SSIS package fails, the error goes to SQL Server Agent job history. Your ADF monitoring console doesn't know about it. Your Power BI incident alerts don't know about it.
For packages that feed Power BI datasets — either directly via an on-premises SQL Server or through a gateway connection — a silent SSIS failure is a Power BI problem that won't appear in Power BI Service's refresh history. The refresh might succeed by loading stale data from the table that SSIS was supposed to update.
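One way to catch this class of silent failure is a freshness check against the target table itself, independent of refresh status. A minimal sketch, using Python's sqlite3 as a stand-in for the warehouse (the table and column names are illustrative, not from any real schema):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def is_stale(conn, table: str, ts_column: str, max_age: timedelta) -> bool:
    """Return True if the newest row in `table` is older than `max_age`.

    Assumes `ts_column` holds ISO 8601 UTC timestamps, so lexicographic
    MAX() is also chronological MAX().
    """
    newest = conn.execute(f"SELECT MAX({ts_column}) FROM {table}").fetchone()[0]
    if newest is None:
        return True  # an empty table counts as stale
    return datetime.now(timezone.utc) - datetime.fromisoformat(newest) > max_age
```

Run on a schedule, a check like this flags the stale table even when every refresh in Power BI Service reports success.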
A monitoring bridge that works without full migration: push SSIS job status to a central monitoring table after each run. Add a final task to each package that writes success/failure, row count, and duration to a monitoring database your cloud monitoring tool can query. This closes the gap without touching the package logic.
ADF: The Middle Ground
ADF is mature, well-documented, and has good native monitoring. The challenge during a three-generation transition is that ADF is simultaneously doing two things: replacing SSIS for on-premises-connected workloads and being gradually replaced by Fabric Pipelines for cloud-native orchestration.
This means some ADF pipelines are actively developed and enhanced, while others are in maintenance mode awaiting migration to Fabric. The monitoring posture for these two groups should differ.
For actively-developed ADF pipelines: rich monitoring with run-level alerting, row count validation, performance baselines, and failure notifications. These pipelines are changing, and their monitoring needs to keep pace.
For maintenance-mode ADF pipelines: basic health monitoring — did it run, did it succeed, how long did it take? No need for sophisticated observability on pipelines you're planning to retire in Q3.
The risk in practice is treating all ADF pipelines the same. Setting up detailed monitoring for a temporary migration pipeline that will be retired in two months wastes effort. Applying maintenance-mode monitoring to a critical production pipeline because "it hasn't caused problems" is a reliability risk. Classify pipelines by criticality and migration status before setting monitoring depth.
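One lightweight way to make that classification explicit is a small rule that maps each pipeline's criticality and migration status to a monitoring tier. The tier names and rules below are illustrative assumptions, not a standard:

```python
# Two monitoring depths: "full" for active or critical pipelines,
# "basic" health checks for non-critical pipelines awaiting retirement.
MONITORING_TIERS = {
    "full":  {"failure_alerts": True, "row_count_checks": True,  "duration_baseline": True},
    "basic": {"failure_alerts": True, "row_count_checks": False, "duration_baseline": False},
}

def tier_for(pipeline: dict) -> str:
    """Critical pipelines always get full monitoring, regardless of
    migration status; only non-critical maintenance-mode pipelines
    drop to basic health checks."""
    if pipeline["critical"]:
        return "full"
    return "basic" if pipeline["status"] == "maintenance" else "full"
```

Encoding the rule this way makes the classification reviewable: a critical pipeline can never silently end up with maintenance-mode monitoring just because someone tagged it as awaiting migration.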
Fabric Pipelines: Monitoring on a Moving Platform
Fabric Data Pipelines are Microsoft's next-generation orchestration layer. As of early 2026, they're well-established for lakehouse workloads but still have feature gaps compared to ADF — particularly for on-premises source connectivity (which still requires gateway clusters) and for complex conditional retry logic.
From a monitoring perspective, Fabric Pipelines add a layer of complexity: they run in Fabric workspaces that have their own capacity management, separate from ADF and separate from Power BI Premium. A Fabric pipeline that's running slowly might be hitting workspace capacity limits, not exhibiting pipeline logic errors.
Fabric's native monitoring provides run history and basic metrics through the Fabric workspace monitoring hub, but it doesn't integrate with ADF monitoring or with Power BI dataset refresh monitoring. For organizations that have Fabric Pipelines loading Delta tables that feed Power BI datasets via Direct Lake, the failure chain is direct: Fabric pipeline → Delta table → Power BI model (Direct Lake). A Fabric pipeline failure affects the model immediately.
This is different from import-mode Power BI datasets, where a failed pipeline means the next scheduled refresh loads stale data. With Direct Lake, there's no staging buffer — what's in the Delta table is what's in the report, in near-real-time. A half-loaded Delta table caused by a Fabric pipeline failure shows up immediately in every report that uses it.
One View Across Three Generations
The goal isn't to force every pipeline into a single tool — SSIS, ADF, and Fabric each have legitimately different use cases and will continue to coexist for years. The goal is a single monitoring view that shows, across all three tooling generations: what ran, what succeeded, what failed, and what's downstream.
This requires an abstraction that normalizes tool-specific details. A "pipeline run" is a pipeline run regardless of which tool executed it. It has a name, a start time, a duration, a status, a row count, and a list of outputs. With this abstraction in place, monitoring logic becomes generic: alert when any pipeline run fails, alert when row count drops below threshold, alert when duration exceeds baseline.
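A sketch of what that normalized record and the generic checks could look like. The field names and thresholds are illustrative assumptions, not taken from any vendor API:

```python
from dataclasses import dataclass, field

@dataclass
class PipelineRun:
    """Normalized run record, regardless of which tool executed it."""
    name: str
    source: str        # "ssis" | "adf" | "fabric"
    started_at: str    # ISO 8601 timestamp
    duration_s: float
    status: str        # "succeeded" | "failed"
    row_count: int
    outputs: list = field(default_factory=list)

def alerts_for(run: PipelineRun, min_rows: int = 1, baseline_s: float = 3600.0) -> list:
    """Generic checks that apply to any run, from any tooling generation."""
    issues = []
    if run.status == "failed":
        issues.append(f"{run.name} ({run.source}) failed")
    if run.row_count < min_rows:
        issues.append(f"{run.name} loaded {run.row_count} rows (below {min_rows})")
    if run.duration_s > baseline_s:
        issues.append(f"{run.name} ran {run.duration_s:.0f}s (baseline {baseline_s:.0f}s)")
    return issues
```

The alerting logic never inspects the `source` field except for labeling: that is the point of the abstraction.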
Maintaining this abstraction requires a collector for each tool: reading from SQL Server Agent history for SSIS, subscribing to Azure Event Grid for ADF, polling the Fabric monitoring API for Fabric Pipelines. It's infrastructure work — but the operational payoff is significant. One incident feed for your entire data stack means one on-call process, one set of escalation paths, and one place to look when something breaks.
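Each collector's job reduces to mapping a tool-specific payload into the common shape. A sketch of two such normalizers; the input field names (`pipelineName`, `durationMs`, and so on) are assumptions about the respective payloads, not verified API contracts:

```python
def normalize_adf(event: dict) -> dict:
    """Map an ADF pipeline-run event into the common run record.
    Field names on `event` are assumed, not a verified API contract."""
    return {
        "name": event["pipelineName"],
        "source": "adf",
        "status": "succeeded" if event.get("status") == "Succeeded" else "failed",
        "duration_s": event.get("durationMs", 0) / 1000.0,
        "row_count": event.get("rowsCopied", 0),
    }

def normalize_ssis(row: dict) -> dict:
    """Map a row read from an SSIS monitoring bridge table
    (column names assumed) into the common run record."""
    return {
        "name": row["package_name"],
        "source": "ssis",
        "status": row["run_status"],
        "duration_s": row["duration_s"],
        "row_count": row["row_count"],
    }
```

A third normalizer for the Fabric monitoring API follows the same pattern; everything downstream of the normalizers is tool-agnostic.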
For teams mid-migration, this unified view also answers a strategic question: which pipelines are still on SSIS that should be migrated, and how critical are they? Visibility into job health, frequency, and downstream impact makes prioritization easier.
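With run history in one place, a first-pass ranking can be as simple as scoring each SSIS job by how often it runs and how much sits downstream. The scoring below is an illustrative heuristic, not a standard formula:

```python
def migration_priority(jobs: list) -> list:
    """Rank SSIS jobs for migration: frequent runs feeding many
    downstream consumers migrate first. Field names are illustrative."""
    return sorted(
        jobs,
        key=lambda j: j["runs_per_day"] * j["downstream_count"],
        reverse=True,
    )
```

The point is less the exact formula than that the unified monitoring data makes any such ranking computable at all, instead of being argued from memory.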