Once a Pipeline has been configured in Nexadata, it can be executed to produce the desired output. This guide walks you through manually executing a pipeline, including key details about dataset selection, column mapping, and troubleshooting during execution.
Step-by-Step Instructions
Step 1: Manually Executing a Pipeline
To execute a pipeline manually, open the configured pipeline and click Execute.
Step 2: Select a Dataset
After clicking Execute, you will be prompted to select a dataset. You have two options:
Use Default Dataset: Displays the default dataset, allowing you to proceed directly with execution.
Choose Dataset: Opens a dropdown menu to select a dataset from the existing configured datasets.
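Conceptually, the same choice applies when a run is triggered programmatically. The snippet below is only a sketch: the payload fields, pipeline identifier, and dataset id are hypothetical and do not reflect a documented Nexadata request schema.

```python
# Hypothetical payload shapes for a pipeline run request; field names are
# illustrative only and are not a documented Nexadata schema.

# Option 1: run against the default dataset configured for the pipeline.
run_with_default = {
    "pipeline_id": "orders-cleanup",  # hypothetical pipeline identifier
    "dataset": "default",
}

# Option 2: run against a specific dataset chosen from the configured datasets.
run_with_chosen = {
    "pipeline_id": "orders-cleanup",
    "dataset": {"id": "ds-2024-q3-orders"},  # hypothetical dataset id
}
```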
Step 3: Map Inbound Columns
If you choose a custom dataset:
Map the inbound columns that the pipeline expects to the corresponding columns in the selected dataset.
Ensure that the mapping is accurate to allow the transformation logic to work as expected.
Note: Proper mapping is a critical step because it allows pipelines to be reused across different datasets.
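For illustration, a column mapping can be thought of as a simple lookup from the inbound column names the pipeline expects to the column names in the chosen dataset. The column names and the validation helper below are hypothetical, not part of Nexadata.

```python
# Illustrative only: keys are the inbound columns the pipeline expects,
# values are the columns present in the chosen dataset.
expected_columns = ["order_id", "customer_email", "order_total"]

column_mapping = {
    "order_id": "OrderNumber",
    "customer_email": "Email",
    "order_total": "TotalAmount",
}

def validate_mapping(expected: list[str], mapping: dict[str, str]) -> None:
    """Fail fast if any expected inbound column is left unmapped."""
    missing = [col for col in expected if col not in mapping]
    if missing:
        raise ValueError(f"Unmapped inbound columns: {missing}")

validate_mapping(expected_columns, column_mapping)
```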
Step 4: Monitor the Execution Process
Once the Start Pipeline action is initiated:
Each step of the pipeline is displayed as it runs, so you can monitor progress.
If any issues occur:
Transformation errors or warnings will be displayed.
Logs for issues in mapping groups will be available for download.
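If you prefer to watch a run from a script, the sketch below polls for status and downloads logs on failure. It assumes a hypothetical status endpoint, log URL, run identifier, and JSON fields; none of these are documented Nexadata APIs.

```python
# A minimal polling sketch under assumed endpoints and field names.
import time
import requests

BASE_URL = "https://nexadata.example.com/api"  # placeholder host
RUN_ID = "run-123"                             # hypothetical run identifier

while True:
    status = requests.get(f"{BASE_URL}/runs/{RUN_ID}", timeout=30).json()
    print(status.get("current_step"), status.get("state"))
    if status.get("state") in ("succeeded", "failed"):
        break
    time.sleep(5)

# If transformation errors or warnings were reported, download the logs.
if status.get("state") == "failed":
    logs = requests.get(f"{BASE_URL}/runs/{RUN_ID}/logs", timeout=30)
    with open("pipeline_run_logs.txt", "wb") as fh:
        fh.write(logs.content)
```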
Future Enhancements to Pipeline Execution
Nexadata is actively expanding its pipeline execution capabilities:
Pipeline Scheduler: Coming in early 2025, the scheduler will allow pipelines to be scheduled for automatic execution.
REST API Execution: Already supported; pipelines can be executed remotely via the REST API (see the sketch after this list).
End-to-End Integration: Upcoming support for reading from and writing directly to supported data connections.
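As a rough illustration of remote execution over REST, the snippet below posts an execution request. The endpoint path, authentication scheme, and payload are assumptions made for the sketch; consult the Nexadata REST API documentation for the actual contract.

```python
# A minimal sketch of remote pipeline execution over REST, assuming a
# hypothetical endpoint, bearer-token auth, and payload shape.
import requests

BASE_URL = "https://nexadata.example.com/api"  # placeholder host
API_TOKEN = "..."                              # placeholder credential

response = requests.post(
    f"{BASE_URL}/pipelines/orders-cleanup/execute",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"dataset": "default"},  # or a chosen dataset id
    timeout=30,
)
response.raise_for_status()
print("Run started:", response.json())
```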
Summary
Pipeline execution in Nexadata provides flexibility and robust functionality to manage and transform data efficiently. With options for dataset customization, detailed monitoring, and upcoming enhancements like scheduling and direct integration, Nexadata continues to evolve its data pipeline capabilities.