Label | Required. User-provided label for the account instance. |
---|
Account properties |
---|
Hostname | Required. The server address to connect to. Default value: None. |
---|
Port number | Required. The database server's port to connect to. Default value: 10000 |
---|
Database name | Required. The name of the database to which the account connects. Default value: None. |
---|
Username | The username that is allowed to connect to the database. Example: Snapuser Default value: None. |
---|
Password | Password used to connect to the data source. This password is used as the default when retrieving connections and must be valid for the data source to be set up. Example: Snapuser Default value: None. |
---|
JDBC jars | List of JDBC JAR files to be loaded. Different driver binaries must have different file names; the same name cannot be reused for a different driver. If this property is left blank, a default JDBC driver is loaded. Enter the JDBC JARs required to configure the Hive Database account for the respective cluster (HDP or CDH). For CDH:
- hive_metastore.jar
- hive_service.jar
- hiveJDBC4.jar
- libfb303-0.9.0.jar
- libthrift-0.9.0.jar
- TCLIServiceClient.jar
- zookeeper-3.3.6.jar (use this when setting up Hive with Zookeeper)
Default value: None
Note:
- The JDBC driver can be uploaded through Designer or Manager and is stored on a per-project basis; only users with access to that project can see the uploaded JDBC drivers. To provide access to all users of your org, place the driver in the /shared project.
- See the Advanced Configurations: Configuring Hive with Kerberos section below for the list of JAR files to upload when configuring Hive with Kerberos.
|
---|
JDBC Driver Class | Required. The JDBC driver class name. For HDP clusters, enter: org.apache.hive.jdbc.HiveDriver. For CDH clusters, enter: com.cloudera.hive.jdbc4.HS2Driver. A connection sketch using these account properties follows this row. Default value: None. |
---|
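The following is a minimal sketch of how the account properties above typically map onto a raw JDBC connection when the Apache Hive (HDP-style) driver is used. The hostname, database, and credentials are hypothetical placeholders; the Snap builds and manages the actual connection internally, so this is for orientation only.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveAccountSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical values -- substitute your own Hostname, Port number,
        // Database name, Username, and Password account settings.
        String hostname = "hive-server.example.com";  // Hostname
        int port = 10000;                             // Port number (default)
        String database = "default";                  // Database name
        String username = "Snapuser";                 // Username
        String password = "Snapuser";                 // Password

        // JDBC Driver Class for HDP clusters; the CDH driver class
        // (com.cloudera.hive.jdbc4.HS2Driver) is loaded the same way.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        String url = "jdbc:hive2://" + hostname + ":" + port + "/" + database;
        try (Connection conn = DriverManager.getConnection(url, username, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```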
Advanced Properties |
---|
Auto commit | If selected, batches are committed immediately after they execute, so only the currently executing batch is rolled back if the Snap fails.
If not selected, a transaction is started for the Snap run and committed upon success; the transaction is rolled back if the Snap fails. Default value: Selected
Note:
For a DB Execute Snap, assume that a stream of documents enters the input view of the Snap and the SQL statement property has JSON paths in the WHERE clause. If the number of documents is large, the Snap executes in more than one batch rather than once per document, and each batch contains a certain number of WHERE clause values. If Auto commit is selected, a failure rolls back only the records in the current batch; if it is not selected, the entire operation is rolled back, as illustrated in the sketch below. For a single execute statement (with no input view), the setting has no practical effect.
|
---|
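The note above describes standard JDBC batching and transaction semantics. The sketch below is a generic JDBC illustration of that behavior, not the Snap's internal code; the table name and values are hypothetical, and whether a given Hive or Impala deployment honors rollback depends on the driver and server configuration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class AutoCommitSketch {
    // Writes ids in groups of batchSize, mirroring the Batch size setting.
    // demo_table is a hypothetical table used only for illustration.
    static void writeInBatches(Connection conn, List<Integer> ids,
                               int batchSize, boolean autoCommit) throws SQLException {
        conn.setAutoCommit(autoCommit);
        try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO demo_table (id) VALUES (?)")) {
            int inBatch = 0;
            for (Integer id : ids) {
                ps.setInt(1, id);
                ps.addBatch();
                if (++inBatch == batchSize) {
                    ps.executeBatch();   // committed right away when autoCommit is on
                    inBatch = 0;
                }
            }
            if (inBatch > 0) {
                ps.executeBatch();       // flush the final partial batch
            }
            if (!autoCommit) {
                conn.commit();           // one commit for the whole run
            }
        } catch (SQLException e) {
            if (!autoCommit) {
                conn.rollback();         // entire operation rolls back
            }
            // With autoCommit on, earlier batches stay committed; only the
            // failing batch is lost.
            throw e;
        }
    }
}
```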
Batch size | Required. Number of statements to execute at a time. Using a large batch size could use up the JDBC placeholder limit of 2100. Example: 10 Default value: 50 |
---|
Fetch size | Required. Number of rows to fetch at a time when executing a query. Large values could cause the server to run out of memory. Example: 100 Default value: 100 |
---|
Max pool size | Required. Maximum number of idle connections a pool will maintain at a time. Example: 10 Default value: 50 |
---|
Max idle life time | Required. Minutes a connection can exist in the pool before it is destroyed; that is, the maximum lifetime of a connection in the pool. Ensure that the value you enter is a few seconds shorter than any database or infrastructure-imposed connection time limit. A value of 0 indicates an infinite lifetime, subject to the Idle Timeout value. An in-use connection is never retired; connections are removed only after they are closed. Example: 30 Default value: 30 |
---|
Idle Connection Test period | Required. Number of minutes for a connection to remain idle before a test query is run. This helps keep database connections from timing out. Default value: 30 |
---|
Idle Timeout | Required. The maximum amount of time a connection is allowed to sit idle in the pool. A value of 0 indicates that idle connections are never removed from the pool. Default value: 5 |
---|
Checkout timeout | Required. Number of milliseconds to wait for a connection to be available in the pool. A value of 0 waits forever. Once the timeout elapses, an exception is thrown and the pipeline fails. Example: 10000 Default value: 10000 A sketch of how these pool settings interact follows this row. |
---|
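Max pool size, Max idle life time, Idle Timeout, and Checkout timeout behave like standard connection-pool parameters. The sketch below uses HikariCP purely as a familiar stand-in to show how such values map onto a pool; it is not the account's actual pool implementation, the JDBC URL and credentials are hypothetical, and Idle Connection Test period has no direct equivalent in this stand-in, so it is omitted.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.concurrent.TimeUnit;

public class PoolSettingsSketch {
    public static void main(String[] args) {
        // HikariCP is used here only as an illustrative stand-in for the
        // account's pool; the JDBC URL and credentials are hypothetical.
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:hive2://hive-server.example.com:10000/default");
        config.setUsername("Snapuser");
        config.setPassword("Snapuser");

        config.setMaximumPoolSize(50);                          // Max pool size
        config.setMaxLifetime(TimeUnit.MINUTES.toMillis(30));   // Max idle life time
        config.setIdleTimeout(TimeUnit.MINUTES.toMillis(5));    // Idle Timeout
        config.setConnectionTimeout(10000);                     // Checkout timeout (ms)

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // A pipeline run would borrow connections from ds here. If no
            // connection becomes free within the Checkout timeout, acquisition
            // fails and the run errors out.
        }
    }
}
```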
Url Properties | Properties to use in the JDBC URL. These properties must be configured when setting up an SSL connection; see the Advanced Configurations: Configuring Hive with SSL section below for details. Example: maxAllowedPacket | 1000 Default value: None. |
---|
Hadoop properties |
---|
Authentication method | Required. Authentication method to use when connecting to the Hadoop service.
- None: Allows connection even without the Username and Password
- Kerberos: Allows connection with Kerberos details such as Client Principal, Keytab file, and Service principal
- User ID: Allows connection with Username only
- User ID and Password: Allows connection with Username and Password
Default value: None |
---|
Use Zookeeper | Specifies whether Zookeeper should be used to locate the Hadoop service instead of a specific hostname. If this checkbox is selected, Zookeeper is used to resolve the location of the database instead of the Hostname field in the standard block. Default value: Not selected
Note:
When using Zookeeper in combination with a Hive account, add the Zookeeper JAR package file on the Groundplex associated with that Hive account. The version of Zookeeper on the Groundplex should be the same as the version your Hive account uses. For HDP users, in addition to the zookeeper.jar package, you might also require the curator-client-X.X.X.jar and curator-framework-X.X.X.jar package files on the Groundplex.
|
---|
Zookeeper URL | If you intend to use Zookeeper, you must provide the URL of the Zookeeper service. Zookeeper URL formats differ between CDH and HDP. Default value: None
Note:
This is NOT the URL of the Hadoop service being sought.
|
---|
Hive properties |
---|
JDBC Subprotocol | Conditional. This is required when the Authentication method is Kerberos. The JDBC subprotocol to be used; the available options are Hive and Impala. Default value: Hive |
---|
Kerberos properties | Configuration information required for Kerberos authentication. These properties must be configured if you select Kerberos in the Authentication method property. |
---|
Client Principal | Used to authenticate to the Kerberos KDC (Key Distribution Center, the network service used by clients and servers for authentication). Default value: None. |
---|
Keytab file | The keytab file (a file used to store encryption keys) used to authenticate to the Kerberos KDC. Default value: None. |
---|
Service principal | Principal used by an instance of a service. Examples:
- If you are connecting to a specific server: hive/host@REALM or impala/host@REALM
- If you are connecting (more common for the Snap) to any compliant host (see the Use Zookeeper property's description), the principal is hive/_HOST@REALM or impala/_HOST@REALM.
See the URL sketch below for how these principals typically appear in a JDBC URL.
Default value: None. |
---|
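For orientation, the sketch below shows the kind of JDBC URLs that the Kerberos and Zookeeper settings above typically correspond to when the Apache Hive driver is used. The hosts, Zookeeper namespace, and realm are hypothetical, and the Cloudera driver uses different URL parameters.

```java
public class KerberosZookeeperUrlSketch {
    public static void main(String[] args) {
        // Connecting to a specific HiveServer2 host with a Service principal
        // of the form hive/host@REALM (all values hypothetical).
        String directUrl = "jdbc:hive2://hive-server.example.com:10000/default;"
                + "principal=hive/hive-server.example.com@EXAMPLE.COM";

        // Zookeeper-based discovery: the URL lists the Zookeeper quorum (the
        // Zookeeper URL setting), not the Hadoop service itself, and the
        // Service principal uses the _HOST placeholder (hive/_HOST@REALM).
        String zookeeperUrl = "jdbc:hive2://zk1.example.com:2181,zk2.example.com:2181,"
                + "zk3.example.com:2181/default;serviceDiscoveryMode=zooKeeper;"
                + "zooKeeperNamespace=hiveserver2;principal=hive/_HOST@EXAMPLE.COM";

        System.out.println(directUrl);
        System.out.println(zookeeperUrl);
    }
}
```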