Configuring Hive Accounts


Overview

This account is used by the Snaps in the Hive Snap Pack. The account can be configured with or without Kerberos and supports SSL connections.

You can create an account from Designer or Manager. In Designer, when building pipelines, every Snap that needs an account prompts you to create a new account or use an existing account. The accounts can be created in or used from:

  • Your private project folder: This folder contains the pipelines that will use the account.
  • Your Project Space’s shared folder: This folder is accessible to all the users that belong to the Project Space.
  • The global shared folder: This folder is accessible to all the users within an organization in the SnapLogic instance.

Account Configuration

In Manager, you can navigate to the required folder and create an account in it. To create a Hive account:

  1. If not already done, upload the JDBC driver for this database as a file for the specific project.
  2. Click Create, then select Hive > Hive Database Account or Generic Hive Database Account (as required).
  3. Supply an account label.
  4. Supply the necessary properties for your database. 
  5. Supply the necessary JDBC driver jars for your driver.
  6. (Optional) Supply additional information on this account in the Notes field of the Info tab.
  7. Click Apply.

Avoid changing account credentials while pipelines that use them are executing. Doing so may lead to unexpected results, including locking the account.

Account Types 

Hive Database Account

 Account Settings

Label

Required. User-provided label for the account instance.

Account properties


Hostname

Required. The server address to connect to. 

Default value: None.

Port number

Required. The database server's port to connect to.

Default value: 10000

Database name

Required. The name of the database to connect to.

Default value: None.

Username


The username that is allowed to connect to the database.

Example: Snapuser 

Default value: None.

Password

The password used to connect to the data source. It is used as the default password when retrieving connections and must be valid to set up the data source.

Example: Snapuser 

Default value: None.

JDBC jars

List of JDBC JAR files to load. Each driver binary must have a unique file name; the same name cannot be reused for a different driver. If this property is left blank, a default JDBC driver is loaded.

Enter the following JDBC jars to configure the Hive Database account for the respective cluster.

For HDP

  • hive-jdbc-2.0.0.2.3.5.0-81-standalone.jar

  • zookeeper-3.4.6.jar (use this when setting up Hive with Zookeeper)

For CDH

  • hive_metastore.jar
  • hive_service.jar
  • HiveJDBC4.jar
  • libfb303-0.9.0.jar
  • libthrift-0.9.0.jar
  • TCLIServiceClient.jar
  • zookeeper-3.3.6.jar (use this when setting up Hive with Zookeeper)

Default value: None

  • The JDBC driver can be uploaded through Designer or Manager, and it is stored on a per-project basis; only users with access to that project can see the uploaded JDBC drivers. To provide access to all users of your org, place the driver in the /shared project.
  • See the Additional Configurations: Configuring Hive with Kerberos section below for a list of JAR files to upload when configuring Hive with Kerberos.

JDBC Driver Class

Required. The JDBC Driver class name. 

For HDP Clusters

Enter the following value: org.apache.hive.jdbc.HiveDriver

For CDH Clusters

Enter the following value: com.cloudera.hive.jdbc4.HS2Driver

Default value: None.
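For reference, here is a minimal plain-JDBC sketch (not SnapLogic code) of how the JDBC Driver Class and the account properties combine into a connection. The hostname, port, database, and credentials are placeholders; the HDP driver class is shown.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveConnectSketch {
        public static void main(String[] args) throws Exception {
            // The value configured in the JDBC Driver Class property (HDP shown)
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Hostname, Port number, and Database name from the account properties
            String url = "jdbc:hive2://hostname:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "Snapuser", "password")) {
                System.out.println("Connected to " + conn.getMetaData().getDatabaseProductName());
            }
        }
    }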

Advanced Properties


Auto commit

When selected, each of the batches is committed immediately after it is executed. If the Snap fails, only the batch being executed at that moment is rolled back.

When deselected, the Snap execution output is committed only after all the batches are executed. If the Snap fails, the entire transaction is rolled back, unless the Snap finds invalid input data before it sends the insert request to the server, and routes the error documents to the Error view.

Default value: Selected

For a DB Execute Snap, assume that a stream of documents enters the Snap's input view and the SQL statement property has JSON paths in the WHERE clause. If the number of documents is large, the Snap executes in multiple batches rather than one per document, each batch containing a certain number of WHERE clause values. If Auto commit is selected, a failure rolls back only the records in the current batch. If Auto commit is deselected, the entire operation is rolled back. For a single execute statement (with no input view), the setting has no practical effect.
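In plain JDBC terms, deselecting Auto commit corresponds to wrapping all batches in a single transaction, roughly as in this sketch; the table name and values are placeholders.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class AutoCommitSketch {
        // All batches commit together; any failure rolls back the whole operation.
        static void insertAll(Connection conn, int[] ids) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO t (id) VALUES (?)")) {
                for (int id : ids) {
                    ps.setInt(1, id);
                    ps.addBatch();
                }
                ps.executeBatch();
                conn.commit();        // nothing is visible until this point
            } catch (SQLException e) {
                conn.rollback();      // entire transaction rolled back
                throw e;
            }
        }
    }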

Batch size


Required. The number of statements to execute at a time.
A large batch size could exhaust the JDBC placeholder limit of 2100.

Example: 10

Default value: 50
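In plain JDBC terms, a batch size of N means flushing the statement batch every N statements, as in this sketch; the table and values are placeholders.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchSizeSketch {
        static void insert(Connection conn, int[] ids, int batchSize) throws SQLException {
            try (PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO t (id) VALUES (?)")) {
                int count = 0;
                for (int id : ids) {
                    ps.setInt(1, id);
                    ps.addBatch();
                    if (++count % batchSize == 0) {
                        ps.executeBatch();   // one round trip per batch
                    }
                }
                ps.executeBatch();           // flush any remaining statements
            }
        }
    }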

Fetch size


Required. Number of rows to fetch at a time when executing a query.
Large values could cause the server to run out of memory.

Example: 100

Default value: 100
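This setting maps to JDBC's Statement.setFetchSize, which bounds how many rows are buffered per round trip, as in this sketch with a placeholder query.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class FetchSizeSketch {
        static void scan(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.setFetchSize(100);   // rows fetched from the server per round trip
                try (ResultSet rs = st.executeQuery("SELECT id FROM t")) {
                    while (rs.next()) {
                        int id = rs.getInt(1);   // rows stream through in chunks of 100
                    }
                }
            }
        }
    }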

Max pool size


Required. Maximum number of idle connections a pool will maintain at a time.

Example: 10

Default value: 50

Max life time

Required. Maximum lifetime of a connection in the pool. Ensure that the value you enter is a few seconds shorter than any database or infrastructure-imposed connection time limit. A value of 0 indicates an infinite lifetime, subject to the Idle Timeout value. An in-use connection is never retired. Connections are removed only after they are closed.

Default value: 30

Idle Timeout

Required. The maximum amount of time a connection is allowed to sit idle in the pool. A value of 0 indicates that idle connections are never removed from the pool.

Default value: 5

Checkout timeout

Required. The number of milliseconds to wait for a connection to become available in the pool. A value of 0 waits indefinitely. If the timeout elapses, an exception is thrown and the pipeline fails.

Example: 10000

Default value: 10000

Url Properties

Properties to use in the JDBC URL. These properties must be configured when setting up an SSL connection. See the Additional Configurations: Configuring Hive with SSL section below for details.

Example: maxAllowedPacket | 1000

Default value: None.
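Each row of the Url properties table is appended to the generated JDBC URL as a key=value pair, roughly as in this sketch; the separator character varies by driver (Hive-style URLs use a semicolon), and the values shown are placeholders.

    public class UrlPropertiesSketch {
        public static void main(String[] args) {
            String baseUrl = "jdbc:hive2://hostname:10000/default";
            // One Url properties row: maxAllowedPacket | 1000
            String url = baseUrl + ";maxAllowedPacket=1000";
            System.out.println(url);
        }
    }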

Hadoop properties

Authentication method

Required. Authentication method to use when connecting to the Hadoop service.  

  • None: Allows connection even without the Username and Password
  • Kerberos: Allows connection with Kerberos details such as Client Principal, Keytab file, and Service principal
  • User ID: Allows connection with Username only
  • User ID and Password: Allows connection with Username and Password
  • User ID and Password with SSL: Allows SSL connections with Username and Password. Ensure that you have installed SSL certificates in the JCC node. 

Default value: None

Use Zookeeper 


Specifies whether Zookeeper should be used to locate the Hadoop service instead of a specific hostname.

If selected, Zookeeper is used to resolve the location of the database instead of the Hostname field in the standard block.

Default value: Not selected

When using Zookeeper in combination with a Hive account, add the Zookeeper JAR package file to the Groundplex associated with that Hive account. The version of Zookeeper on the Groundplex should match the version your Hive account uses.

For HDP users, in addition to the zookeeper.jar package, you might also require the curator-client-X.X.X.jar and curator-framework-X.X.X.jar package files on the Groundplex.

Zookeeper URL

If you intend to use Zookeeper, you must provide the URL of the Zookeeper service. Zookeeper URL formats differ between CDH and HDP.

  • For HDP:
    • Format: hostname1:port,hostname2:port/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
    • Example: na77sl-ihdc-ux02011.clouddev.snaplogic.com:2181,na77sl-ihdc-ux02012.clouddev.snaplogic.com:2181,na77sl-ihdc-ux02013.clouddev.snaplogic.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
  • For CDH:
    • Format: zk=hostname1:port,hostname2:port/hiveserver2
    • Example: zk=cdhclusterqa-1-1.clouddev.snaplogic.com:2181,cdhclusterqa-1-2.clouddev.snaplogic.com:2181,cdhclusterqa-1-3.clouddev.snaplogic.com:2181/hiveserver2

Default value: None

This is NOT the URL for the Hadoop service being sought.
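For illustration, a sketch of the connection string that HDP-style Zookeeper discovery produces; the Zookeeper quorum replaces a fixed HiveServer2 host, and the hostnames are placeholders.

    public class ZookeeperUrlSketch {
        public static void main(String[] args) {
            String url = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/"
                    + ";serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";
            System.out.println(url);
        }
    }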

Hive properties

JDBC Subprotocol


Conditional. The JDBC subprotocol to use. This is required when the Authentication method is Kerberos. The available options are Hive and Impala.

Default value: Hive

Kerberos properties

Configuration information required for the Kerberos authentication. These properties must be configured if you select Kerberos in the Authentication method property.

Client Principal

Used to authenticate to the Kerberos KDC (Key Distribution Center, the network service that clients and servers use for authentication).

Default value: None.

Keytab file

The keytab file (a file that stores encryption keys) used to authenticate to the Kerberos KDC.

Default value: None.

Service principal


Principal used by an instance of a service.

Examples: 

  • If you are connecting to a specific server:
    hive/host@REALM or impala/host@REALM
  • If you are connecting to any compliant host (more common for the Snap; see the Use Zookeeper property's description), the principal is:
    hive/_HOST@REALM or impala/_HOST@REALM

Default value: None.
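Hive JDBC drivers typically carry the service principal in the connection URL. A sketch with placeholder host, realm, and database, assuming a Kerberos ticket is already available on the node:

    public class KerberosUrlSketch {
        public static void main(String[] args) {
            // _HOST is substituted with the server's hostname at connect time,
            // which is what allows Zookeeper-discovered hosts to authenticate.
            String url = "jdbc:hive2://hostname:10000/default"
                    + ";principal=hive/_HOST@EXAMPLE.COM";
            System.out.println(url);
        }
    }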

Generic Hive Database Account

 Account Settings


Label

Required. User provided label for the account instance.

Account properties


Username


The username that is allowed to connect to the database. It is used as the default username when retrieving connections and must be valid to set up the data source.


Example: Snapuser 

Default value: None.

Password

The password used to connect to the data source. It is used as the default password when retrieving connections and must be valid to set up the data source.


Example: Snapuser 

Default value: None.

JDBC URL


The URL of the JDBC database.

Example: jdbc:hive://hostname/dbname:sasl.qop=auth-int

Default value: None.

JDBC jars

List of JDBC JAR files to load. Each driver binary must have a unique file name; the same name cannot be reused for a different driver. If this property is left blank, a default JDBC driver is loaded.

Enter the following JDBC jars to configure the Generic Hive Database account for the respective cluster.

For HDP

  • hive-jdbc-2.0.0.2.3.5.0-81-standalone.jar

  • zookeeper-3.4.6.jar (use this when setting up Hive with Zookeeper)

For CDH

  • hive_metastore.jar
  • hive_service.jar
  • HiveJDBC4.jar
  • libfb303-0.9.0.jar
  • libthrift-0.9.0.jar
  • TCLIServiceClient.jar
  • zookeeper-3.3.6.jar (use this when setting up Hive with Zookeeper)

Default value: None

  • The JDBC driver can be uploaded through Designer or Manager, and it is stored on a per-project basis; only users with access to that project can see the uploaded JDBC drivers. To provide access to all users of your org, place the driver in the /shared project.
  • See the Additional Configurations: Configuring Hive with Kerberos section below for a list of JAR files to upload when configuring Hive with Kerberos.

JDBC Driver Class

Required. The JDBC Driver class name. 

For HDP Clusters

Enter the following value: org.apache.hive.jdbc.HiveDriver

For CDH Clusters

Enter the following value: com.cloudera.hive.jdbc4.HS2Driver

Default value: None.

Advanced Properties


Auto commit

When selected, each of the batches is committed immediately after it is executed. If the Snap fails, only the batch being executed at that moment is rolled back.

When deselected, the Snap execution output is committed only after all the batches are executed. If the Snap fails, the entire transaction is rolled back, unless the Snap finds invalid input data before it sends the insert request to the server, and routes the error documents to the Error view.

Default value: Selected

For a DB Execute Snap, assume that a stream of documents enters the Snap's input view and the SQL statement property has JSON paths in the WHERE clause. If the number of documents is large, the Snap executes in multiple batches rather than one per document, each batch containing a certain number of WHERE clause values. If Auto commit is selected, a failure rolls back only the records in the current batch. If Auto commit is deselected, the entire operation is rolled back. For a single execute statement (with no input view), the setting has no practical effect.

Batch size


Required. The number of statements to execute at a time.
A large batch size could exhaust the JDBC placeholder limit of 2100.

Example: 10

Default value: 50

Fetch size


Required. Number of rows to fetch at a time when executing a query.
Large values could cause the server to run out of memory.

Example: 100

Default value: 100

Max pool size


Required. Maximum number of idle connections a pool will maintain at a time.

Example: 10

Default value: 50

Max idle time

Required. Minutes a connection can exist in the pool before it is destroyed.

Example: 30

Default value: 30

Idle connection test period

Required. The number of minutes a connection can remain idle before a test query is run. This helps keep database connections from timing out.

Default value: 5

Checkout timeout

Required. The number of milliseconds to wait for a connection to become available in the pool. A value of 0 waits indefinitely. If the timeout elapses, an exception is thrown and the pipeline fails.

Example: 10000

Default value: 10000

Url Properties

Properties to use in the JDBC URL. These properties must be configured when setting up an SSL connection. See the Additional Configurations: Configuring Hive with SSL section below for details.

Example: maxAllowedPacket | 1000

Default value: None.

Hadoop Properties

Authentication method

Required. Authentication method to use when connecting to the Hadoop service.  

  • None: Allows connection even without the Username and Password
  • Kerberos: Allows connection with Kerberos details such as Client Principal, Keytab file, and Service principal
  • User ID: Allows connection with Username only
  • User ID and Password: Allows connection with Username and Password

Default value: None

Use Zookeeper 


Specifies whether Zookeeper should be used to locate the Hadoop service instead of a specific hostname.

If selected, Zookeeper is used to resolve the location of the database instead of the Hostname field in the standard block.

Default value: Not selected

Zookeeper Versions

When using Zookeeper in combination with a Hive account, add the Zookeeper JAR package file to the Groundplex associated with that Hive account. The version of Zookeeper on the Groundplex should match the version your Hive account uses.

For HDP users, in addition to the zookeeper.jar package, you might also require the curator-client-X.X.X.jar and curator-framework-X.X.X.jar package files on the Groundplex.

Zookeeper URL

If you intend to use Zookeeper, you must provide the URL of the Zookeeper service. Zookeeper URL formats differ between CDH and HDP.

  • For HDP:
    • Format: hostname1:port,hostname2:port/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
    • Example: na77sl-ihdc-ux02011.clouddev.snaplogic.com:2181,na77sl-ihdc-ux02012.clouddev.snaplogic.com:2181,na77sl-ihdc-ux02013.clouddev.snaplogic.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
  • For CDH:
    • Format: zk=hostname1:port,hostname2:port/hiveserver2
    • Example: zk=cdhclusterqa-1-1.clouddev.snaplogic.com:2181,cdhclusterqa-1-2.clouddev.snaplogic.com:2181,cdhclusterqa-1-3.clouddev.snaplogic.com:2181/hiveserver2

Default value: None

This is NOT the URL for the Hadoop service being sought.

Hive properties

JDBC Subprotocol


Conditional. The JDBC subprotocol to use. This is required when the Authentication method is Kerberos. The available options are Hive and Impala.

Default value: Hive

Kerberos properties

Configuration information required for the Kerberos authentication. These properties must be configured if you select Kerberos in the Authentication method property.

Client Principal

Used to authenticate to the Kerberos KDC (Key Distribution Center, the network service that clients and servers use for authentication).

Default value: None.

Keytab file

The keytab file (a file that stores encryption keys) used to authenticate to the Kerberos KDC.

Default value: None.

Service principal


Principal used by an instance of a service.

Examples: 

  • If you are connecting to a specific server:
    hive/host@REALM or impala/host@REALM
  • If you are connecting to any compliant host (more common for the Snap; see the Use Zookeeper property's description), the principal is:
    hive/_HOST@REALM or impala/_HOST@REALM

Default value: None.

Additional Configurations

Configuring Hive with SSL

Add the following properties to the Url properties table to configure Hive with SSL. These configurations work only with Groundplexes, not Cloudplexes.

URL Property Name | URL Property Value
ssl | Required. Binary value to denote that SSL is enabled. This value must always be 1.
sslTrustStore | Required. The path of the SSL Trust store key file pointing to a JKS, PEM, or CER file. The file can be referenced from the Groundplex's file system.
sslTrustStorePassword | Required. The password configured for the SSL Trust store.
AllowSelfSignedCerts | Binary value to denote whether the driver allows the server to use self-signed SSL certificates. Pass the value 1 to allow the use.
CAIssuedCertNamesMismatch | Binary value to denote that the driver requires the CA-issued SSL certificate's name to match the host name of the Hive server. Pass the value 1 to indicate that the names must match.


The above list applies to Hive with or without Kerberos enabled. With Kerberos enabled, properties such as Client Principal, Keytab file, and Service principal must additionally be provided.
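Put together, the required SSL properties yield a connection string along the lines of this sketch; the truststore path and password are placeholders, and the separator style may vary by driver.

    public class SslUrlSketch {
        public static void main(String[] args) {
            String url = "jdbc:hive2://hostname:10000/default"
                    + ";ssl=1"
                    + ";sslTrustStore=/opt/certs/truststore.jks"
                    + ";sslTrustStorePassword=changeit";
            System.out.println(url);
        }
    }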

Limitations or known issues

"Method not supported" error while validating Apache Hive JDBC v1.2.1 or earlier

The Hive Snap Pack does not validate with Apache Hive JDBC v1.2.1 jars or earlier because of a defect in Hive. HDP 2.6.3 and HDP 2.6.1 run on Apache Hive JDBC v1.2.1 jars.
To validate Snaps that must work with HDP 2.6.3 and HDP 2.6.1, use JDBC v2.0.0 jars.

Testing Environment

  • Hive Version: Hive 1.1.0, Hive 1.2.0
  • Hive with Kerberos works only on Hive JDBC4 driver 2.5.12 and above.
  • Hive with SSL works only on Hive JDBC4 driver 2.5.12 and above.
  • Cluster Versions: CDH 5.16.1, CDH 5.10, HDP 2.6.1, HDP 2.6.3

Snap Pack History


Release | Snap Pack Version | Date | Type | Updates
November 2024 | main29029 | - | Stable | Updated and certified against the current SnapLogic Platform release.
August 2024 | main27765 | - | Stable | Upgraded the org.json.json library from v20090211 to v20240303, which is fully backward compatible.
May 2024 | main26341 | - | Stable | Updated and certified against the current SnapLogic Platform release.
February 2024 | main25112 | - | Stable | Updated and certified against the current SnapLogic Platform release.
November 2023 | main23721 | - | Stable | Updated and certified against the current SnapLogic Platform release.
August 2023 | main22460 | - | Stable | The Hive Execute Snap now includes a new Query type field. When Auto is selected, the Snap tries to determine the query type automatically.
May 2023 | main21015 | - | Stable | The Hive Snap Pack is Cloudera-certified for Cloudera Data Warehouse (CDW). You can use the Hive Execute Snap to work with CDW clusters through a Generic Hive Database account.
February 2023 | main19844 | 09 Feb 2023 | Stable | Upgraded with the latest SnapLogic Platform release.
November 2022 | main18944 | 10 Nov 2022 | Stable | Upgraded with the latest SnapLogic Platform release.
August 2022 | main17386 | 11 Aug 2022 | Stable | Upgraded with the latest SnapLogic Platform release.
4.29 | main15993 | 14 May 2022 | Stable | Upgraded with the latest SnapLogic Platform release.
4.28 | main14627 | 12 Feb 2022 | Stable | Upgraded with the latest SnapLogic Platform release.
4.27 | main12833 | 13 Nov 2021 | Stable | Upgraded with the latest SnapLogic Platform release.
4.26 | main11181 | 14 Aug 2021 | Stable | Upgraded with the latest SnapLogic Platform release.
4.25 | main9554 | 08 May 2021 | Stable | Upgraded with the latest SnapLogic Platform release.
4.24 Patch | 424patches8867 | 11 Mar 2021 | Latest | Fixes the missing library error in the Hive Snap Pack when running Hadoop pipelines in the JDK 11 runtime.
4.24 | main8556 | 13 Feb 2021 | Stable | Upgraded with the latest SnapLogic Platform release.
4.23 | main7430 | 14 Nov 2020 | Stable | Upgraded with the latest SnapLogic Platform release.
4.22 | main6403 | 12 Sep 2020 | Stable | Upgraded with the latest SnapLogic Platform release.
4.21 Patch | 421patches6272 | 27 Jul 2020 | Latest | Fixes the issue where the Snowflake SCD2 Snap generates two output documents despite no changes to Cause-historization fields with DATE, TIME, and TIMESTAMP Snowflake data types, and with the Ignore unchanged rows field selected.
4.21 Patch | 421patches6144 | 02 Jul 2020 | Latest | Fixes two issues with DB Snaps: the connection thread waits indefinitely, causing subsequent connection requests to become unresponsive; and connection leaks occur during pipeline execution.
4.21 Patch | 421patches5851 | 08 Jun 2020 | Latest | Fixes the Hive Execute Snap that fails with a java.lang.NullPointerException error.
4.21 Patch | MULTIPLE8841 | 19 May 2020 | Latest | Fixes the connection issue in Database Snaps by detecting and closing open connections after the Snap execution ends.
4.21 | snapsmrc542 | 09 May 2020 | Stable | Upgraded with the latest SnapLogic Platform release.
4.20 | snapsmrc535 | 08 Feb 2020 | Stable | Upgraded with the latest SnapLogic Platform release.
4.19 | snaprsmrc528 | 14 Nov 2019 | Stable | Upgraded with the latest SnapLogic Platform release.
4.18 | snapsmrc523 | 10 Aug 2019 | Stable | Upgraded with the latest SnapLogic Platform release.
4.17 | ALL7402 | 11 Jun 2019 | Latest | Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.
4.17 | snapsmrc515 | 11 Jun 2019 | Stable | Certified and tested the Snap Pack against CDH 6.1. Fixes an issue with the Hive Execute Snap wherein the Snap would send the input document to the output view even if the Pass through field was not selected; with this fix, the Snap sends the input document to the output view, under the key original, only if you select the Pass through field. Adds the Snap Execution field to all Standard-mode Snaps; in some Snaps, this field replaces the existing Execute during preview check box. Adds a new authentication method, User ID and Password with SSL, for Hive SSL accounts, which allows SSL connections for valid username and password credentials.
4.16 | snapsmrc508 | 16 Feb 2019 | Stable | Upgraded with the latest SnapLogic Platform release.
4.15 Patch | db/hive6330 | 05 Dec 2018 | Latest | Replaced the Max idle time and Idle connection test period properties with the Max life time and Idle Timeout properties, respectively, in the account configuration. The new properties fix the connection release issues that occurred due to default/restricted DB account settings.
4.15 | snapsmrc500 | 15 Dec 2018 | Stable | Added Hive HA support for Zookeeper.
4.14 | snapsmrc490 | 11 Aug 2018 | Stable | Added a new account type, Generic Hive Database Account, which enables connecting to different types of clusters using a JDBC URL.
4.13 Patch | db/hive5269 | 07 Jun 2018 | Latest | Fixes the Hive Execute Snap that stores account passwords in plain text in the log file.
4.13 | snapsmrc486 | 12 May 2018 | Stable | Upgraded with the latest SnapLogic Platform release.
4.12 | snapsmrc480 | 17 Feb 2018 | Stable | Upgraded with the latest SnapLogic Platform release.
4.11 | snapsmrc465 | 11 Nov 2017 | Stable | Upgraded with the latest SnapLogic Platform release.
4.10 | snapsmrc414 | 12 Aug 2017 | Stable | Upgraded with the latest SnapLogic Platform release.
4.9 Patch | hive3068 | 01 Jun 2017 | Latest | Fixes an issue where the connection was not closed after a login failure; exposes autocommit for "Select into" statements in the PostgreSQL Execute and Redshift Execute Snaps.
4.9 | snapsmrc405 | 13 May 2017 | Stable | The Hive Execute Snap is tested on Cloudera version 5.8. The Hive Execute Snap (Kerberos) now works on Groundplexes.
4.8 Patch | hive2752 | 27 Mar 2017 | Latest | Potential fix for a JDBC deadlock issue.
4.8 | snapsmrc398 | 11 Feb 2017 | Stable | Added the Info tab to accounts. Database accounts now invalidate connection pools if account properties are modified and login attempts fail.
4.7.0 Patch | hive2469 | 17 Jan 2017 | Latest | Addresses an issue with ClouderaHiveJDBCDriver (500168), "Unable to connect to server: GSS initiate failed," by changing the connection pooling to HikariCP and adding a privileged user to all getConnection() requests.
4.7.0 Patch | hive2199 | 28 Nov 2016 | Latest | Fixes an issue for database Select Snaps regarding Limit rows not supporting an empty string from a pipeline parameter.
4.7 | snapsmrc382 | 23 Nov 2016 | Stable | The editor box for the SQL statement property in certain database Snaps can now be resized to make the contents easier to read; this applies to the Execute Snaps for Cassandra, Hive, JDBC, Oracle, MySQL, SQL Server, PostgreSQL, SAP HANA, Vertica, and Teradata. Enabled the Hive account with Kerberos authentication (Hive with Kerberos works only on Hive JDBC4 driver 2.5.12 and above).
4.6 Patch | hive1958 | 05 Oct 2016 | Latest | Resolved a performance issue with the Hive Execute and JDBC Execute Snaps when running Hive queries.
4.6 | snapsmrc362 | 13 Aug 2016 | Stable | Snap Pack introduced in 4.6.0. This includes only a Hive Execute Snap that executes DML and DDL statements with Kerberos enabled. It does not include Snaps for load, select, insert, delete, or others at this time. Tested only on Cloudera CDH 5.3 and 5.5, and Hortonworks HDP 2.3.4.



