7.4 API trigger

API triggers enable real-time pipeline execution through HTTP requests, providing immediate data processing capabilities for event-driven architectures. This trigger type is essential for applications requiring instant data transformation, webhook processing, and interactive data analysis workflows.

Create your first API trigger

Setting up an API trigger transforms your pipeline into a web service that can be called programmatically:

Configure API trigger settings

  • Navigate to the "Triggers" tab in your pipeline

  • Select "Add trigger" and choose "API" from available options

  • Assign a meaningful name (e.g., "user_data_processor")

  • The system automatically generates a unique endpoint URL

API endpoint configuration

Once created, your API trigger provides:

  • Unique URL: A dedicated endpoint for pipeline execution

  • Authentication: Configurable security options including API keys or tokens

  • HTTP methods: Support for GET, POST, PUT methods based on use case

  • Request/response format: JSON-based data exchange
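The pieces above can be assembled into a request from Python as well. This is a minimal sketch using only the standard library; the base URL, trigger name, and token are placeholders, and the endpoint path mirrors the curl example later in this lesson.

```python
# Sketch: build an HTTP request for an API trigger endpoint.
# All values (host, trigger name, token) are placeholders.
import json
import urllib.request


def build_trigger_request(base_url, trigger_name, token, payload):
    """Assemble URL, headers, and JSON body for an API trigger call."""
    url = f"{base_url}/api/pipeline_schedules/{trigger_name}/pipeline_runs"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers, method="POST")


req = build_trigger_request(
    "https://your-mage-instance.com",
    "user_data_processor",
    "your_token",
    {"user_id": 12345, "action": "process"},
)
# urllib.request.urlopen(req) would actually send it; omitted here
# because the host is a placeholder.
```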

Passing data to API triggers

API triggers excel at processing incoming data through various methods:

Request with a JSON body:

curl "https://your-mage-instance.com/api/pipeline_schedules/trigger_name/pipeline_runs" \
  -X POST \
  -H "Authorization: Bearer your_token" \
  -H "Content-Type: application/json" \
  -d '{"user_id": 12345, "action": "process"}'

Accessing the request body in your pipeline:

# Access API trigger data within your pipeline
@data_loader
def load_api_data(**kwargs):
    # Access the request payload
    request_data = kwargs.get('api_request_payload', {})
    user_id = request_data.get('user_id')

    # Process the incoming data (process_user_data is your own helper)
    return process_user_data(user_id)

Runtime variables: API triggers support dynamic variable injection:

# Access runtime variables
runtime_vars = kwargs.get('variables', {})
environment = runtime_vars.get('environment', 'production')
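Putting the payload and runtime variables together, a loader might look like the sketch below. The `api_request_payload` and `variables` kwargs keys follow the snippets above (adapt them to the keys your Mage version actually passes), and the `@data_loader` decorator is omitted so the function runs standalone.

```python
# Sketch: one loader that reads both the request payload and a runtime
# variable, with defaults so it also works when triggered without them.

def load_api_data(**kwargs):
    request_data = kwargs.get('api_request_payload', {})
    runtime_vars = kwargs.get('variables', {})

    return {
        'user_id': request_data.get('user_id'),
        'action': request_data.get('action', 'process'),
        'environment': runtime_vars.get('environment', 'production'),
    }


# Simulating a trigger call locally:
result = load_api_data(
    api_request_payload={'user_id': 12345, 'action': 'process'},
    variables={'environment': 'staging'},
)
# result == {'user_id': 12345, 'action': 'process', 'environment': 'staging'}
```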

Response handling:

API triggers return structured responses indicating execution status:

Successful execution:

{
  "pipeline_run_id": "abc123",
  "status": "running",
  "created_at": "2024-01-15T10:30:00Z",
  "pipeline_uuid": "pipeline_uuid_here"
}

Error response:

{
  "error": "Pipeline execution failed",
  "message": "Detailed error description",
  "pipeline_run_id": "abc123"
}
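A caller can distinguish the two response shapes above by checking for the `error` key. A small helper, assuming the response has already been parsed into a dict:

```python
# Sketch: classify an API trigger response as success or failure,
# based on the two response shapes shown above.

def summarize_trigger_response(resp):
    if 'error' in resp:
        return f"Trigger failed ({resp['error']}): {resp.get('message', '')}"
    return f"Run {resp['pipeline_run_id']} is {resp['status']}"


print(summarize_trigger_response({
    'pipeline_run_id': 'abc123',
    'status': 'running',
    'created_at': '2024-01-15T10:30:00Z',
    'pipeline_uuid': 'pipeline_uuid_here',
}))
# → Run abc123 is running
```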


Best practices for API triggers

  • Implement proper error handling for robust integration

  • Validate incoming data thoroughly before processing

  • Monitor API usage patterns and implement rate limiting

  • Document API endpoints for team collaboration

  • Test with various payload sizes and formats
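One way to apply the first two practices is to validate the payload before any processing and fail fast with clear errors. The required fields below are illustrative; substitute your pipeline's own schema.

```python
# Sketch: validate an incoming payload before processing.
# REQUIRED_FIELDS is an example schema, not part of Mage itself.

REQUIRED_FIELDS = {'user_id': int, 'action': str}


def validate_payload(payload):
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field} must be {expected_type.__name__}")
    return errors


assert validate_payload({'user_id': 12345, 'action': 'process'}) == []
assert validate_payload({'user_id': 'oops'}) == [
    'user_id must be int', 'missing field: action'
]
```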

API triggers enable real-time data processing capabilities, transforming batch-oriented pipelines into responsive, event-driven data services that integrate seamlessly with modern application architectures.