...

Error Code: ERROR_STATE, SQL state: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask, Query: ......

...

Panel

Symptom:

The attempt to access a Kerberos-enabled Hadoop Distributed File System (HDFS) on any cluster host fails with the error message "SIMPLE authentication is not enabled", even though all Kerberos parameters are configured correctly.

Error Message:

The following exception is displayed as the error message for the Kerberos-enabled HDFS Snaps.

java.util.concurrent.ExecutionException: java.io.IOException:
Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: 
Client cannot authenticate via:[TOKEN, KERBEROS];


Cause:

All hosts that participate in the Kerberos authentication system must have their internal clocks synchronized within a specified maximum amount of time (known as clock skew). This requirement provides another Kerberos security check. If the clock skew is exceeded between any of the participating hosts, client requests are rejected.

Resolution:

Maintaining synchronized clocks between the KDCs and the Kerberos clients is important; use Network Time Protocol (NTP) software to keep them synchronized.
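
If NTP is already in place and clock skew is still suspected, the offset of the local clock can be checked programmatically. The sketch below is illustrative only; it assumes the Apache Commons Net library is available on the node, and the NTP server address is a placeholder (point it at the same time source the KDC hosts use).

Code Block
languagejava
import java.net.InetAddress;

import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public class ClockSkewCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder NTP server; use the same time source as the KDC hosts.
        InetAddress ntpHost = InetAddress.getByName("pool.ntp.org");

        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(5000);
        TimeInfo info = client.getTime(ntpHost);
        info.computeDetails();

        // Offset of the local clock relative to the NTP server, in milliseconds.
        long offsetMs = info.getOffset() == null ? 0L : info.getOffset();
        System.out.println("Local clock offset: " + offsetMs + " ms");

        // Kerberos typically tolerates about 5 minutes of skew by default.
        if (Math.abs(offsetMs) > 5 * 60 * 1000L) {
            System.out.println("Clock skew exceeds the usual Kerberos tolerance; re-synchronize NTP.");
        }
        client.close();
    }
}

Running this on each cluster host and on the Groundplex node gives a quick view of which machine has drifted.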


Panel

Symptom:

The attempt to access a Kerberos-enabled Hadoop Distributed File System (HDFS) on any cluster host fails with the error message "Server has invalid Kerberos principal:", even though all Kerberos parameters are configured correctly.

Error Message:

The following exception is displayed as the error message. 

Failed on local exception: java.io.IOException: 
java.lang.IllegalArgumentException: Server has invalid Kerberos principal: 
hdfs/cdhclusterqa-2-1.clouddev.snaplogic.com@CLOUDDEV.SNAPLOGIC.COM; 
Host Details : local host is: "<LOCALHOST>/127.0.0.1"; 
destination host is: "cdh2-1.devsnaplogic.com":8020;

Cause:

When Kerberos authentication is configured on the HDFS server, the following properties are added to the HDFS configuration (typically in hdfs-site.xml):


Code Block
languagexml
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@YOUR-REALM.COM</value>
</property>

<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hdfs/_HOST@YOUR-REALM.COM</value>
</property>


The special string _HOST in these properties is replaced at runtime by the fully qualified domain name of the host machine where the daemon is running. This requires that reverse DNS works properly on all hosts configured this way.

Resolution:

One potential cause of this issue is that the host referenced by "_HOST" has multiple host names, and the hostname provided in the "Service Principal" field of the Snap Kerberos configuration does not match the hostname resolved on the NameNode or DataNode. The error message displays the server's service principal; make sure that the same service principal is provided in the Snap Kerberos configuration.

More details can be found at 

You can use the HadoopDNSResolver tool to verify the DNS names; details on the usage of the tool are provided on the same page.
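
As a quick alternative check from the Groundplex node, forward and reverse DNS resolution of the cluster hosts can be verified with a few lines of Java. This is only an illustrative sketch; the hostname is taken from the example error message above and should be replaced with your own NameNode or DataNode host.

Code Block
languagejava
import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder: use the host from the service principal shown in the error message.
        String host = "cdh2-1.devsnaplogic.com";

        // Forward lookup: hostname -> IP address.
        InetAddress addr = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + addr.getHostAddress());

        // Reverse lookup: IP address -> fully qualified hostname.
        // This name should match the hostname portion of the service principal,
        // since Hadoop substitutes it for the _HOST placeholder.
        String canonical = addr.getCanonicalHostName();
        System.out.println(addr.getHostAddress() + " resolves back to " + canonical);
    }
}

A mismatch between the forward and reverse lookups usually points to the DNS problem described above.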

...

Panel

Symptom:

  • The HDFS Reader Snap times out while reading data, even though all credentials are provided correctly.
  • The HDFS Writer Snap times out while writing data, even though all credentials are provided correctly.


Error Message:

The following exception is displayed as the error message for the Kerberos-enabled HDFS Writer Snap.

java.lang.Thread.State: WAITING
at java.lang.Object.wait(Object.java:-1)
at org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2119)
at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2101)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2232)
- locked <0x2fca> (a org.apache.hadoop.hdfs.DFSOutputStream)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2204)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)

Cause:

This can be caused by one of the following:

  • The DataNode is not responding to the Groundplex requests.
  • Security or firewall settings are blocking access from the Groundplex to the DataNode ports.


Resolution:

The edge node on which the Groundplex is running should be able to access all the standard Hadoop ports; the default Hadoop ports vary by distribution.
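
A simple way to confirm that the Groundplex node can reach those ports is a plain TCP connect test, as in the sketch below. The hostnames and port numbers are placeholders (the NameNode RPC port 8020 is taken from the example error message above; actual ports depend on the distribution and Hadoop version).

Code Block
languagejava
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        // Placeholders: replace with your cluster hosts and the ports your distribution uses.
        checkPort("cdh2-1.devsnaplogic.com", 8020);   // NameNode RPC (from the example above)
        checkPort("cdh2-1.devsnaplogic.com", 50010);  // DataNode data transfer (Hadoop 2.x default)
    }

    private static void checkPort(String host, int port) {
        // Attempt a TCP connection with a 5-second timeout.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Reachable: " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("NOT reachable: " + host + ":" + port + " (" + e.getMessage() + ")");
        }
    }
}

Ports that are not reachable from the Groundplex but are reachable from inside the cluster point to the firewall or security settings listed in the Cause above.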