Executing a Pipeline in Nexadata

Learn how to execute pipelines in Nexadata, select datasets, map columns, troubleshoot issues, and explore upcoming enhancements.

Written by Quin Eddy
Updated over a month ago

Once a Pipeline has been configured in Nexadata, it can be executed to produce the desired output results. This guide walks you through the steps to manually execute a pipeline, including key details about dataset selection, column mapping, and troubleshooting during execution.

Step-by-Step Instructions

Step 1: Manually Executing a Pipeline

To execute a pipeline manually:

  1. Open the Pipeline Builder.

  2. Click the Execute button to begin the execution process.

Step 2: Select a Dataset

After clicking Execute, you will be prompted to select a dataset. You have two options:

  • Use Default Dataset: Displays the default dataset, allowing you to proceed directly with execution.

  • Choose Dataset: Opens a dropdown menu to select a dataset from the existing configured datasets.

Step 3: Map Inbound Columns

If you choose a custom dataset:

  1. Map the inbound columns that the pipeline expects with the columns from the selected dataset.

  2. Ensure that the mapping is accurate to allow the transformation logic to work as expected.

πŸ‘ Proper mapping is a critical step in the process because it allows for the flexibility to reuse pipelines across different datasets.

Step 4: Monitor the Execution Process

Once the pipeline has been started:

  • Each step of the pipeline is displayed.

  • If any issues occur:

    • Transformation errors or warnings will be displayed.

    • Logs for issues in mapping groups will be available for download.

Future Enhancements to Pipeline Execution

Nexadata is actively expanding its pipeline execution capabilities:

  1. Pipeline Scheduler: Coming in early 2025, this will allow pipelines to be scheduled for automatic execution.

  2. REST API Execution: Already supported, enabling pipelines to be executed remotely.

  3. End-to-End Integration: Upcoming support for direct reading and writing to supported data connections.
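As one example of the REST API option above, remote execution typically amounts to an authenticated HTTP request that names the pipeline and the dataset to run it against. The endpoint path, payload fields, and auth header below are hypothetical placeholders for illustration, not Nexadata's documented API — consult the actual REST API reference for the real contract:

```python
# Hypothetical sketch of triggering a pipeline over a REST API.
# The URL, payload fields, and auth header are assumptions for
# illustration; they are not Nexadata's documented interface.
import json

def build_execute_request(base_url, pipeline_id, dataset_id, api_token):
    """Assemble the pieces of a hypothetical 'execute pipeline' call."""
    return {
        "method": "POST",
        "url": f"{base_url}/pipelines/{pipeline_id}/execute",
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"dataset_id": dataset_id}),
    }

req = build_execute_request(
    "https://api.example.com/v1", "pl_123", "ds_456", "TOKEN"
)
# req["url"] == "https://api.example.com/v1/pipelines/pl_123/execute"
# The assembled request could then be sent with any HTTP client, e.g.:
#   requests.post(req["url"], headers=req["headers"], data=req["body"])
```

Separating request construction from sending makes the sketch easy to adapt once the real endpoint and authentication scheme are known.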


Summary

Pipeline execution in Nexadata provides flexibility and robust functionality to manage and transform data efficiently. With options for dataset customization, detailed monitoring, and upcoming enhancements like scheduling and direct integration, Nexadata continues to evolve its data pipeline capabilities.
