Lesson 7.5: Trigger multiple pipelines
Orchestrating multiple pipelines is a common requirement in production data environments, where complex workflows span interdependent processes. Mage provides several approaches for coordinating pipeline execution across different triggers and schedules.
Orchestration blocks in Mage Pro
What are orchestration blocks: Orchestration blocks are specialized components in Mage Pro that enable coordination and management of multiple pipeline executions. These blocks provide declarative control over complex workflows, allowing you to define dependencies, execution order, and conditional logic for multi-pipeline scenarios.
Basic orchestration block structure:
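A minimal sketch of such a block, written as a custom Python block that calls the trigger_pipeline helper from the open-source mage_ai package (Mage Pro's orchestration block template may differ; the pipeline UUID and variables below are placeholders):

```python
from mage_ai.orchestration.triggers.api import trigger_pipeline

if 'custom' not in globals():
    from mage_ai.data_preparation.decorators import custom


@custom
def orchestrate(*args, **kwargs):
    # Trigger a downstream pipeline and wait for it to finish.
    trigger_pipeline(
        'downstream_pipeline_uuid',   # pipeline_uuid (placeholder)
        variables={'run_date': str(kwargs.get('execution_date'))},
        check_status=True,            # poll the triggered run's status
        error_on_failure=True,        # fail this block if the run fails
        poll_interval=60,             # seconds between status checks
        poll_timeout=3600,            # stop waiting after one hour
        verbose=True,                 # log each status check
    )
```

The keyword arguments in this sketch map onto the parameters described next.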
Key orchestration parameters:
pipeline_uuid: The unique identifier of the target pipeline to trigger
variables: Runtime variables passed to the triggered pipeline
check_status: Enable polling to monitor the triggered pipeline's execution status
error_on_failure: Control whether a failure in the triggered pipeline raises an error and stops the orchestrating pipeline
poll_interval: How often, in seconds, to check the triggered pipeline's status while monitoring is enabled
poll_timeout: Maximum time to wait for pipeline completion before timing out
verbose: Enable detailed logging of triggered pipeline execution
Pipeline dependencies and sequencing
Sequential pipeline execution: When pipelines must execute in a specific order, you can coordinate them through several methods:
Method 1: Staggered schedule triggers
Configure schedule triggers with time offsets so each pipeline starts only after its predecessor has had time to finish. For example, schedule an ingestion pipeline at 02:00 UTC (cron 0 2 * * *) and the transformation pipeline that consumes its output at 03:00 UTC (cron 0 3 * * *). This is the simplest method, but it assumes the first run always finishes within the gap; the API- and sensor-based methods below enforce the dependency explicitly.
Method 2: API trigger chaining
Use callback blocks or data exporters to trigger downstream pipelines via API calls:
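For example, a data exporter can POST to another pipeline's API trigger endpoint. The URL and token below are placeholders; copy the real ones from the trigger's detail page in the Mage UI:

```python
import requests

if 'data_exporter' not in globals():
    from mage_ai.data_preparation.decorators import data_exporter

# Placeholder URL: Mage API triggers follow the pattern
# https://<host>/api/pipeline_schedules/<id>/pipeline_runs/<token>
TRIGGER_URL = 'https://your-mage-host/api/pipeline_schedules/123/pipeline_runs/your_token'


@data_exporter
def trigger_downstream(data, *args, **kwargs):
    # Fire the downstream pipeline, passing runtime variables in the payload.
    response = requests.post(
        TRIGGER_URL,
        json={'pipeline_run': {'variables': {'upstream_row_count': len(data)}}},
        timeout=30,
    )
    response.raise_for_status()
```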
Method 3: Sensor-based coordination
Use sensor blocks to monitor completion status and trigger dependent pipelines:
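Mage's sensor blocks support this via the check_status helper; the upstream UUID below is a placeholder:

```python
from mage_ai.orchestration.run_status_checker import check_status

if 'sensor' not in globals():
    from mage_ai.data_preparation.decorators import sensor


@sensor
def upstream_finished(*args, **kwargs) -> bool:
    # Returns True once the upstream pipeline has completed successfully
    # for this execution date; Mage re-evaluates the sensor until then.
    return check_status(
        'upstream_pipeline_uuid',  # placeholder
        kwargs['execution_date'],
    )
```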
Common orchestration patterns
Sequential execution: Pipelines run one after another in a strict order, with orchestration blocks (or any of the three methods above) enforcing the dependency between each stage.
Fan-out pattern: One upstream pipeline triggers multiple downstream pipelines in parallel:
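A sketch of fan-out using the trigger_pipeline helper from earlier; the downstream UUIDs are placeholders. With check_status=False each call returns immediately, so the triggered runs proceed in parallel on the scheduler:

```python
from mage_ai.orchestration.triggers.api import trigger_pipeline

if 'custom' not in globals():
    from mage_ai.data_preparation.decorators import custom

# Placeholder UUIDs for the downstream pipelines.
DOWNSTREAM_PIPELINES = ['load_warehouse', 'refresh_dashboards', 'train_model']


@custom
def fan_out(*args, **kwargs):
    # Kick off every downstream pipeline without waiting for completion.
    for uuid in DOWNSTREAM_PIPELINES:
        trigger_pipeline(
            uuid,
            variables={'run_date': str(kwargs.get('execution_date'))},
            check_status=False,
        )
```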
Fan-in pattern: Multiple upstream pipelines feed into a single downstream pipeline:
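One way to implement the fan-in gate is a sensor at the head of the downstream pipeline that waits for every upstream run; the UUIDs are placeholders. (You can also give each upstream its own sensor block so the checks appear separately in the UI.)

```python
from mage_ai.orchestration.run_status_checker import check_status

if 'sensor' not in globals():
    from mage_ai.data_preparation.decorators import sensor

# Placeholder UUIDs for the upstream pipelines.
UPSTREAM_PIPELINES = ['ingest_orders', 'ingest_customers']


@sensor
def all_upstreams_finished(*args, **kwargs) -> bool:
    # Fan-in gate: True only once every upstream pipeline has a
    # successful run for this execution date.
    return all(
        check_status(uuid, kwargs['execution_date'])
        for uuid in UPSTREAM_PIPELINES
    )
```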
Diamond pattern: Combines fan-out and fan-in for complex data workflows: one pipeline fans out to parallel branches whose outputs fan back into a single downstream pipeline. The fan-out and fan-in sketches above compose directly to implement it.
Best practices for multi-pipeline orchestration
Error handling and recovery:
Implement proper error handling across orchestrated pipelines
Use appropriate timeout settings for dependent pipelines
Consider retry mechanisms for transient failures (see the retry sketch after this list)
Monitor pipeline dependencies through logging and alerts
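A sketch of retrying transient failures around the trigger_pipeline helper; the attempt count and backoff values are illustrative, not prescribed:

```python
import time

from mage_ai.orchestration.triggers.api import trigger_pipeline


def trigger_with_retries(pipeline_uuid, attempts=3, backoff_seconds=60, **kwargs):
    # Retry with linear backoff; re-raise once the attempts are exhausted.
    for attempt in range(1, attempts + 1):
        try:
            return trigger_pipeline(
                pipeline_uuid,
                check_status=True,
                error_on_failure=True,
                poll_timeout=3600,
                **kwargs,
            )
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)
```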
Resource management:
Stagger execution times to avoid resource conflicts
Monitor system resources during peak orchestration periods
Use appropriate timeout settings for each pipeline stage
Consider using different compute resources for resource-intensive pipelines
Monitoring and observability:
Implement comprehensive logging across all orchestrated pipelines
Set up alerts for pipeline failures that could break downstream dependencies
Track end-to-end execution times for the complete workflow (a timing sketch follows this list)
Maintain visibility into data lineage across pipeline boundaries
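A minimal timing sketch using Python's standard logging; swap the logger for whatever observability stack you run:

```python
import logging
import time

from mage_ai.orchestration.triggers.api import trigger_pipeline

logger = logging.getLogger(__name__)


def timed_trigger(pipeline_uuid, **kwargs):
    # Log the wall-clock duration of the triggered run, success or failure.
    start = time.monotonic()
    try:
        trigger_pipeline(pipeline_uuid, check_status=True, error_on_failure=True, **kwargs)
        logger.info('%s succeeded in %.1fs', pipeline_uuid, time.monotonic() - start)
    except Exception:
        logger.exception('%s failed after %.1fs', pipeline_uuid, time.monotonic() - start)
        raise
```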
Multi-pipeline orchestration enables complex data workflows while maintaining modularity and reusability of individual pipeline components.