This account is used by the Snaps in the JDBC Snap Pack.
You can create an account from Designer or Manager. In Designer, when working on pipelines, every Snap that needs an account prompts you to create a new account or use an existing account. The accounts can be created in or used from:
- Your private project folder: This folder contains the pipelines that will use the account.
- Your Project Space’s shared folder: This folder is accessible to all the users that belong to the Project Space.
- The global shared folder: This folder is accessible to all the users within an organization in the SnapLogic instance.
Account Configuration
In Manager, you can navigate to the required folder and create an account in it (see Accounts). To create an account for a generic JDBC driver:
- If you have not already done so, upload the JDBC driver for this database as a file for the specific project.
- Click Create, then select JDBC > Generic Database Account.
- Supply an account label.
- Supply the account properties for your database.
- (Optional) Supply additional information on this account in the Notes field of the Info tab.
- Click Apply.
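The first step matters because the Snaplex JVM cannot load a driver class whose JAR was never uploaded. As a hypothetical standalone illustration (not a SnapLogic API), a plain `Class.forName` check shows the underlying failure mode, using the Vertica driver class from the examples below:

```java
// Sketch: check whether a JDBC driver class is loadable. On a JVM that
// does not have the driver JAR on its classpath, Class.forName throws
// ClassNotFoundException -- the same root cause as an account failing
// validation when the driver file was never uploaded to the project.
public class DriverCheck {
    static boolean isDriverAvailable(String driverClass) {
        try {
            Class.forName(driverClass);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // false on a bare JVM without the Vertica JAR installed
        System.out.println(isDriverAvailable("com.vertica.jdbc.Driver"));
    }
}
```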
**Warning:** Avoid changing account credentials while pipelines using them are in progress. Doing so may lead to unexpected results, including locking the account.
Account Types
Generic Database Account
Account Settings
| Field | Description |
|---|---|
| Label | Required. User-provided label for the account instance. |
| **Account Properties** | Required. |
| JDBC Driver | Required. Select the JDBC driver to use. Only Type 3 and Type 4 JDBC drivers are supported. Example: vertica-jdk5-6.1.2-0.jar. Default value: [None] |
| JDBC Driver Class | Required. The JDBC driver class name to use. Example: com.vertica.jdbc.Driver. Default value: [None] |
| JDBC Url | The JDBC URL to use. Example: jdbc:vertica://Snaplogic.com/database. Default value: [None] |
| Username | The database username to use. Example: Snapuser. Default value: [None] |
| Password | The database password to use. Example: Snapuser. Default value: [None] |
| Database name | Select the database to connect to using the account. The available options are: Auto detect, PostgreSQL, Redshift, MySQL, Oracle, SQL Server 2012, SQL Server 2008, SAPHana, Apache Hive, DB2, SQLMX, Apache Derby, and Spark SQL. If you use the PostgreSQL JDBC driver to connect to a Redshift database, the Auto detect option detects the PostgreSQL database instead of Redshift. The behavior of the JDBC Snap is optimized for the selected database. Example: Oracle |
| Test Query | Enter a custom test query to validate the database connection when Auto detect is selected for Database name. Example: Select 1 |
| **Advanced Properties** | Required. |
| Min pool size | Required. Minimum number of idle connections the pool maintains at a time. |
| Max pool size | Required. Maximum number of idle connections the pool maintains at a time. Example: 10. Default value: 15 |
| Max idle time | Required. Number of seconds a connection can exist in the pool before being destroyed. Example: 300. Default value: 60 minutes |
| Checkout timeout | Required. Number of milliseconds to wait for a connection to become available in the pool; zero waits indefinitely. When the timeout elapses, an exception is thrown and the pipeline fails. Example: 10000. Default value: 10000 |
| Number of helper threads | Required. Number of threads that help execute operations. Increasing the threads can improve performance. Example: 3. Default value: 3 |
| Url Properties | Properties to use in the JDBC URL. Example: maxAllowedPacket = 1000. Default value: [None] |
| Auto commit | If selected, each batch is committed immediately after it executes, so only the currently executing batch is rolled back if the Snap fails. If not selected, a transaction is started for the Snap run and committed upon success; the entire transaction is rolled back if the Snap fails. |
| Fetch size | Required. Number of records to retrieve from the database at a time. Example: 100. Default value: 100 |
| Batch size | Required. Number of statements to execute at a time. Example: 10. Default value: 50 |
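Batch size determines how many statements are grouped into each round trip to the database: for example, 120 input documents with a batch size of 50 execute as three batches of 50, 50, and 20. A minimal sketch of that grouping (illustrative arithmetic only; SnapLogic's internal batching is not exposed as an API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: how a batch-size setting partitions a stream of statements
// into execution batches. Each list element is the size of one batch.
public class BatchSizing {
    static List<Integer> batchSizes(int documents, int batchSize) {
        List<Integer> sizes = new ArrayList<>();
        for (int remaining = documents; remaining > 0; remaining -= batchSize) {
            sizes.add(Math.min(batchSize, remaining));
        }
        return sizes;
    }

    public static void main(String[] args) {
        System.out.println(batchSizes(120, 50)); // [50, 50, 20]
    }
}
```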
Account Encryption
| Encryption Type | Behavior |
|---|---|
| Standard Encryption | If you are using Standard Encryption, the High sensitivity settings under Enhanced Encryption are followed. |
| Enhanced Encryption | If you have the Enhanced Account Encryption feature, the sensitivity level selected for this account determines which fields are encrypted. |
Auto Commit with Execute Snaps
For a DB Execute Snap, assume that a stream of documents enters the Snap's input view and the SQL statement property has JSON paths in its WHERE clause. If the number of documents is large, the Snap executes in multiple batches rather than one statement per document; each batch contains a certain number of WHERE clause values. If Auto commit is enabled, a failure rolls back only the records in the current batch. If Auto commit is disabled, the entire operation is rolled back. For a single execute statement (with no input view), the setting has no practical effect.
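The difference in rollback scope can be sketched with a small model. This is an illustrative simulation of the behavior described above, not SnapLogic internals: batches execute in order, the batch at `failIndex` fails, and the function returns how many batches remain committed afterwards.

```java
// Sketch: rollback scope with and without Auto commit.
// With auto commit, batches completed before the failure stay committed;
// without it, the single open transaction rolls everything back.
public class AutoCommitModel {
    static int committedBatches(int totalBatches, int failIndex, boolean autoCommit) {
        // Batches that completed before the failing batch.
        int executedBeforeFailure = Math.min(failIndex, totalBatches);
        // The failing batch itself is rolled back either way; without
        // auto commit, the whole transaction rolls back.
        return autoCommit ? executedBeforeFailure : 0;
    }

    public static void main(String[] args) {
        System.out.println(committedBatches(5, 3, true));  // 3
        System.out.println(committedBatches(5, 3, false)); // 0
    }
}
```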
Active Directory authentication
SnapLogic supports Active Directory authentication for SQL Server with driver JAR version mssql-jdbc-6.2.2-jre8.jar. Ensure that this JAR file is installed, and use the following configurations in Account Settings to configure Active Directory authentication:
- JDBC Driver class: com.microsoft.sqlserver.jdbc.SQLServerDriver
- JDBC Connection URL: jdbc:sqlserver://ServerNameFQDN:portNumber;databaseName=DBNAME
SnapLogic supports Active Directory authentication for SQL Server using the User impersonation method. The prerequisites are as follows:
In the account settings, add the following to Url property name and Url property value:

| Url property name | Url property value |
|---|---|
| IntegratedSecurity | True |
| AuthenticationScheme | JavaKerberos |

Also in the account settings, enter your Active Directory Username and Password.
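Combining the pieces above, the connection URL that results from the base JDBC URL plus the two URL properties would look like the following. The host, port, and database name are hypothetical placeholders:

```java
// Sketch: assemble a SQL Server JDBC URL with the Active Directory
// (JavaKerberos) properties listed above. Server, port, and database
// are illustrative placeholders, not real endpoints.
public class AdUrlExample {
    static String buildUrl(String host, int port, String database) {
        return "jdbc:sqlserver://" + host + ":" + port
             + ";databaseName=" + database
             + ";IntegratedSecurity=True"
             + ";AuthenticationScheme=JavaKerberos";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("sqlhost.example.com", 1433, "SalesDB"));
    }
}
```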
Connecting to Cassandra Database Using Cassandra Simba Driver
To connect to a Cassandra database using the Cassandra Simba driver, ensure that the appropriate Simba JAR files are installed based on the JDK version on the Snaplex node. Contact your Org admin to find out which JDK the Snaplex node uses. To determine which JAR version to use, see the JDBC Install Guide for details.
Aside from the JAR files, use the following configurations in the Account Settings when using the Cassandra Simba driver:
- JDBC Driver class: com.simba.cassandra.jdbc42.Driver
- JDBC URL: jdbc:cassandra://<host>:<port>;AuthMech=1;UID=<user id>;PWD=<password>;DefaultKeyspace=<database name>
- Database name: Auto detect
Examples
This section provides examples of JDBC connection details for different sources. Note that specifics may vary based on operating system or database version.
Teradata
- JDBC Driver Jar: terajdbc4_13.jar, tdgssconfig_13.jar
- JDBC Driver Class: com.teradata.jdbc.TeraDriver
- JDBC URL: jdbc:teradata://<host>:<port>/TMODE=ANSI,CHARSET=UTF8,DBS_PORT=1025
SAP Hana
- JDBC Driver Jar: ngdbc.jar
- JDBC Driver Class: com.sap.db.jdbc.Driver
- JDBC URL: jdbc:sap://<host>:<port>/?currentschema=<your HANA Schema>
Sybase
- JDBC Driver Jar: jconn4.jar
- JDBC Driver Class: com.sybase.jdbc4.jdbc.SybDriver
- JDBC URL: jdbc:sybase:Tds:<host>:<port>/<database>
JDBC ODBC Bridge
- JDBC Driver Jar: None required (the JDBC-ODBC bridge was bundled with the JRE through Java 7 and removed in Java 8)
- JDBC Driver Class: sun.jdbc.odbc.JdbcOdbcDriver
- JDBC URL: jdbc:odbc:ODBCConnectionName
Hive
Varies based on version.
See https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBC for general information on Hive JDBC Drivers for HiveServer2.
- JDBC Driver Jar: Varies based on version. Hive is usually provided as part of a larger Hadoop cluster based on products from Cloudera, Hortonworks, or Amazon. You must use the appropriate drivers for your Hive instance.
  - Cloudera: The necessary driver and support jars can be acquired from http://www.cloudera.com/downloads/connectors/hive/jdbc/2-5-4.html.
  - Hortonworks: The necessary driver and support jars can be acquired from https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_dataintegration/content/hive-jdbc-odbc-drivers.html.
  - Amazon EMR: The necessary driver and support jars can be acquired from http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/HiveJDBCDriver.html.
- JDBC Driver Class: com.cloudera.hive.jdbc4.HS2Driver
- JDBC URL: jdbc:hive2://<host>:<port>
Informix
You need to log in with your IBM account to download the Informix JDBC drivers and then upload them into SnapLogic.