Databricks - Run Job
Overview
You can use this Snap to automate the execution of a set of tasks or processes within a Databricks workspace. The Snap triggers the task, periodically checks its progress, and completes after the job finishes. If you cancel the pipeline before the task finishes, the Snap requests that Databricks terminate the task.
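The Snap's internal implementation is not documented here, but the trigger, poll, and cancel behavior described above corresponds to the pattern supported by the public Databricks Jobs REST API (2.1). The following Python sketch is only an illustration of that pattern, not the Snap's actual code; the workspace host, token, and argument values are placeholders, and the function name run_notebook_and_wait is hypothetical:

```python
import time
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder workspace URL
TOKEN = "<personal-access-token>"                        # placeholder credential
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def run_notebook_and_wait(task_name, notebook_path, cluster_id,
                          parameters=None, interval_check=10):
    """Submit a one-time notebook run, then poll until it reaches a final state."""
    payload = {
        "run_name": task_name,
        "tasks": [{
            "task_key": task_name,
            "existing_cluster_id": cluster_id,
            "notebook_task": {
                "notebook_path": notebook_path,
                "base_parameters": parameters or {},  # the Snap's Key/Value pairs
            },
        }],
    }
    resp = requests.post(f"{HOST}/api/2.1/jobs/runs/submit",
                         headers=HEADERS, json=payload)
    resp.raise_for_status()
    run_id = resp.json()["run_id"]
    try:
        while True:
            state = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                                 headers=HEADERS,
                                 params={"run_id": run_id}).json()["state"]
            # TERMINATED, SKIPPED, and INTERNAL_ERROR are final life-cycle states.
            if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
                return state
            time.sleep(interval_check)  # plays the role of Interval check (seconds)
    except KeyboardInterrupt:
        # Mirrors the Snap's behavior on pipeline cancellation: request termination.
        requests.post(f"{HOST}/api/2.1/jobs/runs/cancel",
                      headers=HEADERS, json={"run_id": run_id})
        raise
```

The interval_check argument stands in for the Snap's Interval check (seconds) setting, and the cancel request in the except branch mirrors what the Snap does when you stop the pipeline mid-run.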
Snap Type
The Databricks - Run Job Snap is a Write-type Snap.
Prerequisites
A valid client ID.
A valid account with the required permissions.
Support for Ultra Pipelines
Works in Ultra Pipelines.
Limitations and Known Issues
None.
Snap Views
| Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
|---|---|---|---|---|
| Input | Document | | | Requires a valid task name, notebook path, and cluster information. |
| Output | Document | | | Executes the selected notebook. |
| Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about Error handling in Pipelines. |
Snap Settings
Asterisk ( * ): Indicates a mandatory field.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Add icon: Indicates that you can add fields in the field set.
Remove icon: Indicates that you can remove fields from the field set.
Upload icon: Indicates that you can upload files.
| Field Name | Field Type | Description |
|---|---|---|
| Label* (Default Value: Databricks - Run Job) | String | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline. |
| Task name* (Default Value: N/A) | String/Expression | Specify the name of the task that performs the job. |
| Notebook path* (Default Value: N/A) | String/Expression/Suggestion | Specify the path of the saved notebook to run in this job. A notebook is a web-based interface that allows you to create, edit, and execute data science and data engineering workflows. Learn more about Databricks notebooks. |
| Cluster* (Default Value: N/A) | String/Expression/Suggestion | Specify the cluster that runs the job within its environment. |
| Parameter(s) | | Use this field set to specify the parameters to run the job (see the sketch after this table). Each parameter consists of a Key and a Value. |
| Key* (Default Value: N/A) | String/Expression | Specify the parameter key. |
| Value* (Default Value: N/A) | String/Expression | Specify the parameter value. |
| Interval check (seconds)* (Default Value: 10) | Integer/Expression | Specify the number of seconds to wait before checking the status of the task. |
| Snap Execution (Default Value: Execute only) | Dropdown list | Select one of the following three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
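For context on how the Key/Value pairs reach the notebook: Databricks passes them as notebook parameters (base_parameters in Jobs API terms), which notebook code typically reads through the widgets utility. A minimal sketch, assuming a hypothetical parameter with key input_date:

```python
# Runs inside the Databricks notebook referenced by Notebook path.
# "input_date" is a hypothetical Key configured in the Snap's Parameter(s) field set;
# dbutils.widgets.get returns the matching Value as a string.
input_date = dbutils.widgets.get("input_date")
print(f"Running job for date: {input_date}")
```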
Example
Run Job on a Cluster
The following example pipeline demonstrates how to run a job, defined in a notebook, on a cluster.
Step 1: Configure the Databricks - Run Job Snap with the following settings:
a. Task name: Specify the name of the task that the Databricks - Run Job Snap must perform.
b. Notebook path: Specify the path to the Databricks notebook that contains the code to be executed. This path indicates the location within the Databricks environment where the notebook is stored.
c. Cluster: Specify the cluster on which the job must be executed. The cluster configuration (including computational resources) is predefined and identified by this name and ID.
d. Interval check (seconds): Specify the frequency (in seconds) at which the Snap will check the status of the running job. In this case, it will check every 10 seconds.
[Screenshots: Databricks - Run Job Configuration and Databricks - Run Job Output]
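The configuration and output screenshots from the original page are not reproduced here. For illustration, the notebook at the configured Notebook path could be as simple as the following hypothetical content (spark is the session object Databricks provides to every notebook):

```python
# Hypothetical notebook content for this example.
# Builds a one-row DataFrame and displays it; this output appears in the
# Databricks run logs, while the Snap's output document reports the run status.
from pyspark.sql import Row

df = spark.createDataFrame([Row(id=1, status="ok")])
df.show()
```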
Step 2: Configure the Mapper Snap to store the result status of the Databricks - Run Job Snap. On validation, the Mapper Snap displays the job success message.