Use this account type to connect to Databricks clusters.
eXtreme Pipelines on Azure Databricks that use the JAR Submit Snap fail with the job canceled because the SparkContext shuts down.
- When ELT and Spark SQL 2.x Snap account credentials—such as user names, passwords, client secrets, auth codes and tokens, secret keys, and keystores—are auto-filled using the Google Chrome browser, the accounts, and hence the Pipelines, fail. This is because the browser overwrites the field values with its own encrypted values, which the SnapLogic Platform cannot read. SnapLogic recommends that you do not auto-save your Snap account credentials in the Chrome browser.
- Ensure that you delete any credentials that the browser has already saved for elastic.snaplogic.com, and then perform ONE of the following actions:
- Option 1: Click the key icon that appears in the address bar after you submit your login credentials at elastic.snaplogic.com, and then click Never.
- Option 2: Disable the Offer to save passwords option at chrome://settings/passwords while working with your SnapLogic Pipelines. Note that if you disable this option, your Chrome browser will not remember your passwords on any other website either.
| Parameter Name | Data Type | Description | Default Value | Example |
|---|---|---|---|---|
| Label | String | The name for the account. We recommend that you update the account name if there is more than one account of the same type in your project. | N/A | TestAWSAccount |
| Token | String | Token generated in Azure Databricks. See Azure Databricks Authentication for details on how to generate a token. | N/A | |
| Azure Databricks URL | String | URL of the region where you want to launch the cluster. | N/A | https://eastus.azuredatabricks.net/ |
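To illustrate how the two account fields fit together, the sketch below builds an authenticated request against the Databricks REST API, which accepts a personal access token as a Bearer header. The workspace URL, token value, and the choice of the `clusters/list` endpoint are illustrative assumptions, not values from this page; substitute your own account settings.

```python
import urllib.request

# Hypothetical placeholders -- use your own workspace URL and the
# personal access token generated in Azure Databricks.
DATABRICKS_URL = "https://eastus.azuredatabricks.net"
TOKEN = "dapi-EXAMPLE-TOKEN"

def auth_headers(token):
    # The Databricks REST API authenticates with a Bearer token header.
    return {"Authorization": f"Bearer {token}"}

# Example request (constructed but not sent): list clusters in the workspace.
request = urllib.request.Request(
    f"{DATABRICKS_URL}/api/2.0/clusters/list",
    headers=auth_headers(TOKEN),
)
print(request.full_url)
print(request.get_header("Authorization"))
```

A valid token and URL configured in the Snap account are used by the platform in essentially this way; an expired or browser-overwritten token causes the authentication failure described in the known issues above.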
Snap Pack History
- Accounts support validation. Thus, you can click Validate in the account settings dialog to validate that your account is configured correctly.
4.21 Patch 421patches5928
- Adds Hierarchical Data Format v5 (HDF5) support in AWS EMR. With this enhancement, you can read HDF5 files and parse them into JSON files for further data processing. See Enabling HDF5 Support for details.
- Adds support for Python virtual environment to the PySpark Script Snap to enable reading HDF5 files in the S3 bucket. You can specify the path for the virtual machine environment's ZIP file in this field.
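The HDF5-to-JSON parse step mentioned above can be sketched as a recursive walk over a group hierarchy. The function name and sample data below are illustrative; with h5py available in the PySpark virtual environment, an `h5py.File` group (which behaves like a mapping) could be passed in the same way, after reading datasets into plain lists or scalars.

```python
import json

def to_jsonable(node):
    # Recursively convert an HDF5-like group (any mapping of sub-groups
    # and values) into JSON-serializable structures. A plain nested dict
    # stands in for an h5py group here, so the sketch runs without h5py.
    if hasattr(node, "items"):
        return {key: to_jsonable(value) for key, value in node.items()}
    return node

# Stand-in for an HDF5 group hierarchy:
group = {"measurements": {"temperature": [20.5, 21.0], "unit": "C"}}
print(json.dumps(to_jsonable(group)))
# → {"measurements": {"temperature": [20.5, 21.0], "unit": "C"}}
```

The resulting JSON strings can then be emitted as documents for downstream Snaps to process.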
4.21 Patch 421patches5851
- Optimizes Spark engine execution on AWS EMR, requiring fewer compute resources.
- Introduced a new account type, Azure Databricks Account. This enhancement makes account configuration mandatory for the PySpark Script and JAR Submit Snaps.
- Enhanced the PySpark Script Snap to display the Pipeline Execution Statistics after a Pipeline with the Snap executes successfully.
4.17 Patch ALL7402
- Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.
- No updates made. Automatic rebuild with a platform release.
- New Snap Pack. Execute Java Spark and PySpark applications through the SnapLogic platform. Snaps in this Snap Pack are:
- JAR Submit: Upload your existing Spark Java JAR programs as eXtreme Pipelines.
- PySpark Script: Upload your existing PySpark scripts as eXtreme Pipelines.