7.2 Run settings

Run settings provide comprehensive control over how your triggers execute pipelines, offering different configuration options for schedule-based and API-based triggers. These settings ensure reliable pipeline execution while providing flexibility for various operational requirements.

Schedule-based trigger run settings

For schedule triggers, run settings focus on time-based execution management and operational reliability:

Timeout configuration: Set the maximum execution time (in seconds) before the pipeline run is terminated. This prevents runaway processes and keeps resource usage bounded:

  • Short pipelines: 300-900 seconds (5-15 minutes)

  • Medium pipelines: 1800-3600 seconds (30-60 minutes)

  • Long-running ETL: 7200+ seconds (2+ hours)

Status for timed out runs: Define how the system handles pipeline timeouts:

  • Failed (default): Mark timed-out runs as failed for alerting

  • Cancelled: Mark as cancelled for softer handling
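The same limit can also be respected defensively inside a block. The sketch below is generic Python, not a Mage API: it assumes a 3600-second trigger timeout and stops work cleanly before the platform would mark the run as failed or cancelled.

import time

# Assumed trigger-level timeout for this sketch; the in-block budget stays
# slightly below it so work stops before the platform terminates the run.
TRIGGER_TIMEOUT_SECONDS = 3600
SAFETY_MARGIN_SECONDS = 120

def process_in_chunks(chunks, handle):
    # Process chunks until done or until the time budget is spent.
    deadline = time.monotonic() + TRIGGER_TIMEOUT_SECONDS - SAFETY_MARGIN_SECONDS
    processed = 0
    for chunk in chunks:
        if time.monotonic() >= deadline:
            # Exit cleanly instead of letting the trigger timeout kill the run.
            print(f'Time budget spent after {processed} chunks; stopping early')
            break
        handle(chunk)
        processed += 1
    return processed

# Example usage with trivial per-chunk work:
process_in_chunks(range(10), handle=print)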

Advanced execution controls:

SLA settings: Configure Service Level Agreement thresholds for pipeline execution monitoring and alerting when performance degrades.

Keep running pipeline even if blocks fail: Enable this option when you want downstream blocks to continue executing even if upstream blocks encounter errors. Useful for data quality pipelines where partial success is acceptable.

Skip run if previous run still in progress: Prevents overlapping executions by skipping new runs when previous executions haven't completed. Essential for resource-intensive pipelines or when data consistency requires sequential processing.

Create initial pipeline run if start date is before current execution period: Enables backfill functionality for schedule triggers, automatically creating runs for missed execution periods when the trigger starts.

Runtime variables for schedule triggers: Schedule triggers support dynamic variable injection, and each block can read the injected values from kwargs:

# Access scheduled execution context
execution_date = kwargs.get('execution_date')
ds = kwargs.get('ds')  # Date string in YYYY-MM-DD format

# Access custom runtime variables
environment = kwargs.get('variables', {}).get('environment', 'production')
batch_size = kwargs.get('variables', {}).get('batch_size', 1000)
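In context, these lookups sit inside a decorated block. The following is a minimal sketch of a data loader consuming the same variables; the decorator import matches Mage's block template, while the function name and printed message are illustrative only.

import pandas as pd

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader

@data_loader
def load_daily_batch(*args, **kwargs) -> pd.DataFrame:
    ds = kwargs.get('ds')  # execution date as 'YYYY-MM-DD'
    variables = kwargs.get('variables', {})
    environment = variables.get('environment', 'production')
    batch_size = variables.get('batch_size', 1000)

    # Placeholder: a real block would query a warehouse or API here.
    print(f'Loading up to {batch_size} rows for {ds} in {environment}')
    return pd.DataFrame({'ds': [ds], 'environment': [environment]})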

API-based trigger run settings

API triggers provide flexible methods for passing dynamic data and configuration parameters to your pipeline during execution. Unlike schedule triggers that rely on predefined variables, API triggers can receive real-time data through HTTP requests, making them ideal for processing user inputs, webhook payloads, and event-driven data. Mage supports multiple approaches for sending variables to API triggers, allowing you to choose the method that best fits your integration requirements and payload structure. Whether you're processing simple key-value pairs or complex nested data, understanding these variable passing methods is essential for building responsive, data-driven pipelines.

Runtime variables through multiple methods:

Method 1: JSON payload variables. Pass variables through the request body structure:

{
  "pipeline_run": {
    "variables": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}

Method 2: Root-level payload (simplified). Use the ?_use_root_keys=true URL parameter to simplify the payload structure:

{
  "key1": "value1",
  "key2": "value2"
}
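A hedged example of this call using Python's requests library; the trigger URL is a placeholder mirroring the sample shown later in this lesson:

import requests

# Placeholder trigger URL; copy the real one from the trigger's detail page.
TRIGGER_URL = 'https://cluster.mage.ai/api/pipeline_schedules/137/pipeline_runs'

# With _use_root_keys=true, the variables sit at the top level of the body.
response = requests.post(
    TRIGGER_URL,
    params={'_use_root_keys': 'true'},
    json={'key1': 'value1', 'key2': 'value2'},
    timeout=30,
)
response.raise_for_status()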

Method 3: URL parameters. Include variables directly in the URL query string:

?key3=value3&key4=value4&tag[]=pro&tag[]=power
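In Python's requests, repeated array-style keys like tag[] can be sent by passing a list, which the library encodes as repeated query parameters (the URL is again a placeholder):

import requests

TRIGGER_URL = 'https://cluster.mage.ai/api/pipeline_schedules/137/pipeline_runs'

# Encodes to ?key3=value3&key4=value4&tag[]=pro&tag[]=power (brackets URL-escaped)
response = requests.post(
    TRIGGER_URL,
    params={
        'key3': 'value3',
        'key4': 'value4',
        'tag[]': ['pro', 'power'],
    },
    timeout=30,
)
response.raise_for_status()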

Method 4: Array-based variables. Send multiple variable sets for batch processing:

[
  {
    "key1": "value1"
  },
  {
    "key1": "value12",
    "key2": "value2"
  },
  {
    "key3": "value3"
  },
  {
    "key4": "value4",
    "key5": "value45"
  },
  {
    "key3": "value35"
  }
]
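Mirroring the payload above, the request body is the JSON array itself; a minimal sketch with a placeholder URL:

import requests

TRIGGER_URL = 'https://cluster.mage.ai/api/pipeline_schedules/137/pipeline_runs'

# Each element in the array is one set of runtime variables.
variable_sets = [
    {'key1': 'value1'},
    {'key1': 'value12', 'key2': 'value2'},
]
response = requests.post(TRIGGER_URL, json=variable_sets, timeout=30)
response.raise_for_status()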

Sample cURL commands:

# Basic API trigger with JSON payload
curl -X POST https://cluster.mage.ai/api/pipeline_schedules/137/pipeline_runs \
  --header 'Content-Type: application/json' \
  --data '{
    "pipeline_run": {
      "variables": {
        "key1": "value1",
        "key2": "value2"
      }
    }
  }'

API trigger best practices:

  • Validate input data before processing to prevent errors (see the sketch after this list)

  • Use appropriate timeout values based on expected processing time

  • Implement proper error handling for robust API integration

  • Monitor API usage patterns and implement rate limiting if needed

  • Use asynchronous execution for long-running processes

  • Test with various payload formats during development
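As a concrete illustration of the first point, this sketch validates trigger-supplied variables before any processing begins; the required keys and bounds are assumptions for the example:

def validate_variables(variables):
    # Collect all problems so the error message reports everything at once.
    errors = []

    batch_size = variables.get('batch_size')
    if not isinstance(batch_size, int) or not 1 <= batch_size <= 10_000:
        errors.append('batch_size must be an integer between 1 and 10,000')

    environment = variables.get('environment')
    if environment not in {'staging', 'production'}:
        errors.append("environment must be 'staging' or 'production'")

    if errors:
        # Fail fast so the run is marked failed before any work starts.
        raise ValueError('; '.join(errors))
    return variables

# Example usage at the top of a block:
validate_variables({'batch_size': 500, 'environment': 'production'})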

Run settings provide the operational foundation for reliable trigger execution, ensuring your pipelines perform consistently across different execution contexts and operational requirements.