This post is reposted from: http://blog.cloudera.com/blog/2017/12/hadoop-delegation-tokens-explained/


Apache Hadoop’s security was designed and implemented around 2009, and has been stabilizing since then. However, due to a lack of documentation around this area, it’s hard to understand or debug when problems arise. Delegation tokens were designed and are widely used in the Hadoop ecosystem as an authentication method. This blog post introduces the concept of Hadoop Delegation Tokens in the context of the Hadoop Distributed File System (HDFS) and the Hadoop Key Management Server (KMS), and provides some basic code and troubleshooting examples. It is noteworthy that many other services in the Hadoop ecosystem also utilize delegation tokens, but for brevity we will only discuss HDFS and KMS.

This blog assumes readers understand the basic concept of Kerberos, for the purpose of understanding the authentication flow, as well as Hadoop KMS and HDFS Transparent Encryption, for the purpose of understanding what HDFS and KMS are. Readers who are not interested in HDFS transparent encryption can ignore the KMS portions of this post. A previous blog post about general authorization and authentication in Hadoop can be found on the Cloudera Engineering blog.

Quick Introduction to Hadoop Security

Hadoop was initially implemented without real authentication, which means data stored in Hadoop could be easily compromised. The security feature was added later via HADOOP-4487 in 2010 with the following two fundamental goals:

  1. Preventing unauthorized access to the data stored in HDFS.
  2. Not adding significant cost while achieving goal #1.

In order to achieve the first goal, we need to ensure

  • Any clients accessing the cluster are authenticated to ensure they are who they claim to be.
  • Any servers of the cluster are authenticated to be part of the cluster.

For this goal, Kerberos was chosen as the underlying authentication service. Other mechanisms, such as Delegation Tokens, Block Access Tokens, and Trust, were added to complement Kerberos. In particular, Delegation Tokens were introduced to achieve the second goal (see the next section for how). Below is a simplified diagram that illustrates where Kerberos and Delegation Tokens are used in the context of HDFS (other services are similar):

Figure 1: Simplified Diagram of HDFS Authentication Mechanisms
In the simple HDFS example above, there are several authentication mechanisms in play:

  • The end user (joe) can talk to the HDFS NameNode using Kerberos
  • The distributed tasks the end user (joe) submits can access the HDFS NameNode using joe’s Delegation Tokens; this will be the focus of the rest of this blog post
  • The HDFS DataNodes talk to the HDFS NameNode using Kerberos
  • The end user and the distributed tasks can access HDFS DataNodes using Block Access Tokens.

We will give a brief introduction to some of the above mechanisms in the Other Ways Tokens Are Used section at the end of this blog post. For more details of the Hadoop security design, please refer to the design doc in HADOOP-4487 and the Hadoop Security Architecture presentation.

What is a Delegation Token?

While it’s theoretically possible to rely solely on Kerberos for authentication, it has its own problems when used in a distributed system like Hadoop. Imagine if, for each MapReduce job, all the worker tasks had to authenticate via Kerberos using a delegated TGT (Ticket Granting Ticket): the Kerberos Key Distribution Center (KDC) would quickly become the bottleneck. The red lines in the graph below demonstrate the issue: there could be thousands of node-to-node communications for a single job, resulting in the same magnitude of KDC traffic. In fact, it would inadvertently perform a distributed denial-of-service (DDoS) attack on the KDC in very large clusters.
Figure 2: Simplified Diagram Showing the Authentication Scaling Problem in Hadoop

Thus, Delegation Tokens were introduced as a lightweight authentication method to complement Kerberos. Kerberos is a three-party protocol; in contrast, Delegation Token authentication is a two-party protocol.

Delegation Tokens work as follows:

  1. The client initially authenticates with each server via Kerberos, and obtains a Delegation Token from that server.
  2. The client uses the Delegation Tokens for subsequent authentications with the servers instead of using Kerberos.

The client can, and very often does, pass Delegation Tokens to other services (such as YARN) so that those services (such as the mappers and reducers) can authenticate as, and run jobs on behalf of, the client. In other words, the client ‘delegates’ credentials to those services. Delegation Tokens have an expiration time and require periodic renewals to remain valid. However, they can’t be renewed indefinitely: there is a max lifetime. Before it expires, a Delegation Token can also be cancelled.

Delegation Tokens eliminate the need to distribute a Kerberos TGT or keytab, which, if compromised, would grant access to all services. A Delegation Token, by contrast, is strictly tied to its associated service and eventually expires, causing less damage if exposed. Moreover, Delegation Tokens make credential renewal more lightweight: the renewal is designed so that only the renewer and the service are involved in the renewal process. The token itself remains the same, so parties already using the token do not have to be updated.

For availability reasons, Delegation Tokens are persisted by the servers. The HDFS NameNode persists Delegation Tokens to its metadata (i.e., the fsimage and edit logs); KMS persists Delegation Tokens into Apache ZooKeeper, in the form of ZNodes. This allows Delegation Tokens to remain usable even if the servers restart or fail over.

The server and the client have different responsibilities for handling Delegation Tokens. The two sections below provide some details.

Delegation Tokens at the Server-side

The server (that is, the HDFS NameNode and KMS in Figure 2) is responsible for:

  • Issuing Delegation Tokens, and storing them for verification.
  • Renewing Delegation Tokens upon request.
  • Removing the Delegation Tokens either when they are canceled by the client, or when they expire.
  • Authenticating clients by verifying the provided Delegation Tokens against the stored Delegation Tokens.

Delegation Tokens in Hadoop are generated and verified following the HMAC mechanism. There are two parts of information in a Delegation Token: a public part and a private part. Delegation Tokens are stored at the server side as a hashmap, with the public information as the key and the private information as the value.

The public information is used for token identification in the form of an identifier object. It consists of:

Kind: The kind of token (HDFS_DELEGATION_TOKEN, or kms-dt). The token itself also carries a kind, matching the identifier’s kind.
Owner: The user who owns the token.
Renewer: The user who can renew the token.
Real User: Only relevant if the owner is impersonated. If the token is created by an impersonating user, this field identifies the impersonating user. For example, when oozie impersonates user joe, Owner is joe and Real User is oozie.
Issue Date: Epoch time when the token was issued.
Max Date: Epoch time until which the token can be renewed.
Sequence Number: UUID to identify the token.
Master Key ID: ID of the master key used to create and verify the token.
Table 1: Token Identifier (public part of a Delegation Token)

The private information is represented by the class DelegationTokenInformation in AbstractDelegationTokenSecretManager. It is critical for security and contains the following fields:

renewDate: Epoch time when the token is expected to be renewed. If it is earlier than the current time, the token has expired.
password: The password, calculated as the HMAC of the Token Identifier using the master key as the HMAC key. It is used to validate the Delegation Token the client presents to the server.
trackingId: A tracking identifier that can be used to associate usages of a token across multiple client sessions. It is computed as the MD5 of the token identifier.
Table 2: Delegation Token Information (private part of a Delegation Token)

Notice the Master Key ID in Table 1: it is the ID of the master key living on the server. The master key is used to generate every Delegation Token; it is rolled at a configured interval and never leaves the server.
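
To make this concrete, below is a minimal sketch of the server-side store and password check. Hadoop’s SecretManager defaults to HmacSHA1 for the password computation; everything else here (the class, method names, and the string-keyed map) is illustrative, not Hadoop’s actual code, which lives in AbstractDelegationTokenSecretManager.

import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class TokenStoreSketch {
  // The server-side store: public part (encoded identifier) as the key,
  // private part (here just the HMAC password) as the value.
  private final Map<String, byte[]> currentTokens = new ConcurrentHashMap<>();

  // password = HMAC(masterKey, identifier); HmacSHA1 is Hadoop's default.
  static byte[] createPassword(byte[] identifier, SecretKey masterKey) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(masterKey);
    return mac.doFinal(identifier);
  }

  void store(String encodedIdentifier, byte[] password) {
    currentTokens.put(encodedIdentifier, password);
  }

  // Verification: the token must still be in the store, and the presented
  // password must match the stored one (the real code also checks expiry).
  boolean verify(String encodedIdentifier, byte[] presented) {
    byte[] stored = currentTokens.get(encodedIdentifier);
    return stored != null && Arrays.equals(stored, presented);
  }
}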

The server also has a configuration to specify a renew-interval (default is 24 hours), which is also the expiration time of the Delegation Token. Expired Delegation Tokens cannot be used to authenticate, and the server has a background thread to remove expired Delegation Tokens from the server’s token store.

Only the renewer of a Delegation Token can renew it, and only before it expires. A successful renewal extends the Delegation Token’s expiration time by another renew-interval, until it reaches its max lifetime (default is 7 days).
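
Expressed as code, the renewal rule amounts to the following sketch (illustrative, not Hadoop’s exact code):

import java.io.IOException;

// Each successful renewal extends the expiration by one renew-interval,
// but never past the token's max lifetime.
class RenewalRule {
  static long renew(long nowMs, long maxDateMs, long renewIntervalMs) throws IOException {
    if (nowMs > maxDateMs) {
      throw new IOException("Token has reached its max lifetime; renewal refused");
    }
    return Math.min(maxDateMs, nowMs + renewIntervalMs); // the new expiration time
  }
}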

The table in Appendix A gives the configuration names and their default values for HDFS and KMS.

Delegation Tokens at the Client-side

The client is responsible for:

  • Requesting new Delegation Tokens from the server. A renewer can be specified when requesting the token.
  • Renewing Delegation Tokens (if the client specified itself as the ‘renewer’), or asking another party (the specified ‘renewer’) to renew them.
  • Requesting the server to cancel Delegation Tokens.
  • Presenting a Delegation Token to authenticate with the server.

The client-side-visible Token class is org.apache.hadoop.security.token.Token, defined in Hadoop Common. The following table describes what’s contained in a Token.

identifier: The token identifier, matching the public information part at the server side.
password: The password, matching the password at the server side.
kind: The kind of token (e.g. HDFS_DELEGATION_TOKEN, or kms-dt); it matches the identifier’s kind.
service: The name of the service (e.g. ha-hdfs:ns1 for HDFS, 172.31.113.88:16000 for KMS, as seen in the log excerpt below).
renewer: The user who can renew the token (e.g. yarn).
Table 3: Delegation Token at Client Side
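
As a hedged illustration of the client-side view, the sketch below inspects a Token’s fields and round-trips it through its string encoding. It assumes a token obtained elsewhere (e.g. from fs.addDelegationTokens() as in Appendix B); the class and method below are ours.

import java.io.IOException;
import org.apache.hadoop.security.token.Token;

class TokenInspection {
  // Print a token's public fields, then show that it can be serialized to a
  // string and reconstructed, e.g. to hand it to another process.
  static void inspect(Token<?> token) throws IOException {
    System.out.println("kind    = " + token.getKind());    // e.g. HDFS_DELEGATION_TOKEN
    System.out.println("service = " + token.getService()); // e.g. ha-hdfs:ns1

    String encoded = token.encodeToUrlString();
    Token<?> copy = new Token<>();
    copy.decodeFromUrlString(encoded);
  }
}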

How authentication with Delegation Tokens works is explained in the next section.

Below is an example log excerpt at job submission time. INFO logs are printed with the tokens from all services; in the example below, we see an HDFS Delegation Token and a KMS Delegation Token.

$ hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-mapreduce-examples-2.6.0-cdh5.13.0.jar pi 2 3
Number of Maps  = 2
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Starting Job
17/10/22 20:50:03 INFO client.RMProxy: Connecting to ResourceManager at example.cloudera.com/172.31.113.88:8032
17/10/22 20:50:03 INFO hdfs.DFSClient: Created token for xiao: HDFS_DELEGATION_TOKEN owner=xiao@GCE.CLOUDERA.COM, renewer=yarn, realUser=, issueDate=1508730603423, maxDate=1509335403423, sequenceNumber=4, masterKeyId=39 on ha-hdfs:ns1
17/10/22 20:50:03 INFO security.TokenCache: Got dt for hdfs://ns1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns1, Ident: (token for xiao: HDFS_DELEGATION_TOKEN owner=xiao@GCE.CLOUDERA.COM, renewer=yarn, realUser=, issueDate=1508730603423, maxDate=1509335403423, sequenceNumber=4, masterKeyId=39)
17/10/22 20:50:03 INFO security.TokenCache: Got dt for hdfs://ns1; Kind: kms-dt, Service: 172.31.113.88:16000, Ident: (kms-dt owner=xiao, renewer=yarn, realUser=, issueDate=1508730603474, maxDate=1509335403474, sequenceNumber=7, masterKeyId=69)

For readers who want to write Java code to authenticate, sample code is provided in Appendix B.

Example: Delegation Tokens’ Lifecycle

Now that we understand what a Delegation Token is, let’s take a look at how it’s used in practice. The graph below shows an example flow of authentications for running a typical application, where the job is submitted through YARN and then distributed to multiple worker nodes in the cluster to execute.

Figure 3: How Delegation Token is Used for Authentication in a Typical Job

For brevity, the steps of Kerberos authentication and the details of task distribution are omitted. There are five general steps in the graph:

  1. The client wants to run a job in the cluster. It gets an HDFS Delegation Token from HDFS NameNode, and a KMS Delegation Token from the KMS.
  2. The client submits the job to the YARN Resource Manager (RM), passing along the Delegation Tokens it just acquired as well as the ApplicationSubmissionContext.
  3. YARN RM verifies the Delegation Tokens are valid by immediately renewing them. Then it launches the job, which is distributed (together with the Delegation Tokens) to the worker nodes in the cluster.
  4. Each worker node authenticates with HDFS using the HDFS Delegation Token as it accesses HDFS data, and authenticates with KMS using the KMS Delegation Token as it decrypts HDFS files in encryption zones.
  5. After the job is finished, the RM cancels the Delegation Tokens for the job.

Note: a step not drawn in the diagram above is that the RM also tracks each Delegation Token’s expiration time and renews the token when 90% of its validity has passed. Delegation Tokens are tracked in the RM on an individual basis, not per token kind, so tokens with different renewal intervals can all be renewed correctly. The token renewer classes are implemented using the Java ServiceLoader mechanism, so the RM does not have to be aware of specific token kinds. For especially interested readers, the relevant code is in the RM’s DelegationTokenRenewer class.
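
For illustration only, the scheduling logic boils down to something like the sketch below. It is much simplified from DelegationTokenRenewer, and the names are ours:

import java.util.Timer;
import java.util.TimerTask;

// Schedule a renewal when 90% of the time until expiration has elapsed.
class RenewalScheduler {
  private final Timer timer = new Timer(true); // daemon timer thread

  void scheduleRenewal(long expirationMs, Runnable renewAction) {
    long delay = (long) ((expirationMs - System.currentTimeMillis()) * 0.9);
    timer.schedule(new TimerTask() {
      @Override public void run() { renewAction.run(); }
    }, Math.max(0, delay));
  }
}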

What is this InvalidToken error?

Now we know what Delegation Tokens are and how they are used when running a typical job. But don’t stop here! Let’s look at some typical Delegation Token related error messages from application logs, and figure out what they mean.

Token is expired

Sometimes, the application fails with AuthenticationException, with an InvalidToken exception wrapped inside. The exception message indicates that “token is expired”. Guess why this could happen?

….
17/10/22 20:50:12 INFO mapreduce.Job: Job job_1508730362330_0002 failed with state FAILED due to: Application application_1508730362330_0002 failed 2 times due to AM Container for appattempt_1508730362330_0002_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page: click on links to logs of each attempt.
Diagnostics: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt owner=xiao, renewer=yarn, realUser=, issueDate=1508730603474, maxDate=1509335403474, sequenceNumber=7, masterKeyId=69) is expired, current time: 2017-10-22 20:50:12,166-0700 expected renewal time: 2017-10-22 20:50:05,518-0700
….
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt owner=xiao, renewer=yarn, realUser=, issueDate=1508730603474, maxDate=1509335403474, sequenceNumber=7, masterKeyId=69) is expired, current time: 2017-10-22 20:50:12,166-0700 expected renewal time: 2017-10-22 20:50:05,518-0700
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:535)

Token can’t be found in cache

Sometimes, the application fails with AuthenticationException, with an InvalidToken exception wrapped inside. The exception message indicates that “token can’t be found in cache”. Guess why this could happen, and what’s the difference with the “token is expired” error?

….
17/10/22 20:55:47 INFO mapreduce.Job: Job job_1508730362330_0003 failed with state FAILED due to: Application application_1508730362330_0003 failed 2 times due to AM Container for appattempt_1508730362330_0003_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page: click on links to logs of each attempt.
Diagnostics: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt owner=xiao, renewer=yarn, realUser=, issueDate=1508730937041, maxDate=1509335737041, sequenceNumber=8, masterKeyId=73) can’t be found in cache
java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt owner=xiao, renewer=yarn, realUser=, issueDate=1508730937041, maxDate=1509335737041, sequenceNumber=8, masterKeyId=73) can’t be found in cache
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:294)
at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:528)
at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1448)

at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:784)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:367)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:265)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt owner=xiao, renewer=yarn, realUser=, issueDate=1508730937041, maxDate=1509335737041, sequenceNumber=8, masterKeyId=73) can’t be found in cache

at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:535)

Explanation:

The two errors above share the same cause: the token that was used to authenticate has expired, and can no longer be used to authenticate.

The first message can say explicitly that the token has expired because the token is still stored on the server. When the server verifies the token, the expiration-time check fails, throwing the “token is expired” exception.

Now, can you guess when the second error could happen? Remember from the ‘Delegation Tokens at the Server-side’ section that the server has a background thread to remove expired tokens. If a token is used for authentication after the server’s background thread has removed it, the verification logic on the server cannot find the token. This results in an exception being thrown, saying “token can’t be found”.
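
To illustrate, the background removal amounts to a periodic sweep like the sketch below (names are illustrative, not Hadoop’s actual code). Once the sweep has dropped an entry, later verifications fail with “token can’t be found in cache” rather than “token is expired”:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A simplified sketch of the server's expiry sweep: drop every token whose
// renewDate has passed. Run periodically by a background thread.
class ExpiredTokenRemover implements Runnable {
  private final Map<String, Long> tokenToRenewDate = new ConcurrentHashMap<>();

  @Override
  public void run() {
    long now = System.currentTimeMillis();
    tokenToRenewDate.entrySet().removeIf(e -> e.getValue() < now);
  }
}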

The graph below shows the sequence of these events.
Figure 4: Life Cycle of Delegation Token

Note that when a token is explicitly canceled, it will be immediately removed from the store. So for canceled tokens, the error will always be “token can’t be found”.

To further debug these errors, it is necessary to grep both the client logs and the server logs for the specific token sequence number (“sequenceNumber=7” or “sequenceNumber=8” in the examples above). You should be able to see events related to the token’s creation, renewals (if any), and cancellation (if any) in the server logs.

Long-running Applications

At this point, you know all the basics about Hadoop Delegation Tokens. But there is one missing piece: we know that Delegation Tokens cannot be renewed beyond their max lifetime, so what happens to applications that need to run longer than the max lifetime?

The bad news is that Hadoop does not have a unified way for all applications to handle this, so there is no magic configuration that any application can turn on to make it ‘just work’. But it is still possible. The good news for Spark-submitted applications is that Spark has implemented these magical parameters: Spark gets the Delegation Tokens and uses them for authentication, similar to what we described in the earlier sections. However, Spark does not renew the tokens; instead, it simply gets a new token when the existing one is about to expire. This allows the application to run forever. The relevant code is in Spark’s credential-renewal logic. Note that this requires giving a Kerberos keytab to your Spark application.

But what if you are implementing a long-running service application and want the tokens handled explicitly by the application? This involves two parts: renewing the tokens until their max lifetime, and replacing them with new tokens after the max lifetime.

Note that this is not a usual practice, and is only recommended if you’re implementing a service.

Implementing a Token Renewer

Let’s first study how token renewal should be done. The best way is to study YARN RM’s DelegationTokenRenewer code.

Some key points from that class are:

1. It is a single class that manages all the tokens. Internally, it has a thread pool to renew tokens and another thread to cancel tokens. Renewal happens at 90% of the expiration time; cancellation has a small delay (30 seconds) to prevent races.

2. The expiration time of each token is managed separately. A token’s expiration time is retrieved programmatically by calling the token’s renew() API; the return value of that call is the expiration time.

dttr.expirationDate =
    UserGroupInformation.getLoginUser().doAs(
        new PrivilegedExceptionAction<Long>() {
          @Override
          public Long run() throws Exception {
            return dttr.token.renew(dttr.conf);
          }
        });

This is another reason why the YARN RM renews a token immediately after receiving it: to learn when it will expire.

3. The max lifetime of each token is retrieved by decoding the token’s identifier and calling its getMaxDate() API. Other fields in the identifier can be obtained by calling similar APIs.

if (token.getKind().equals(HDFS_DELEGATION_KIND)) {
  try {
    AbstractDelegationTokenIdentifier identifier =
        (AbstractDelegationTokenIdentifier) token.decodeIdentifier();
    maxDate = identifier.getMaxDate();
  } catch (IOException e) {
    throw new YarnRuntimeException(e);
  }
}

4. Because of points 2 and 3, the renew interval and max lifetime do not need to be read from configuration. The server’s configuration may change from time to time; the client should not depend on it, and has no way of knowing it.


Handling Tokens after Max Lifetime

The token renewer will only renew a token until its max lifetime; after the max lifetime, the job will fail. If your application is long-running, you should consider either utilizing the mechanisms described in the YARN documentation about long-lived services, or adding logic to your own delegation token renewer class to retrieve new delegation tokens when the existing ones are about to reach their max lifetime. A sketch of the latter approach follows.
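
Below is a minimal sketch of fetching replacement tokens, assuming the application holds a Kerberos keytab (as Spark requires for the same purpose). The class and method names are ours; scheduling, error handling, and distributing the new tokens to workers are omitted.

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

// Before the current tokens hit their max lifetime, re-authenticate with the
// keytab and fetch brand-new tokens, similar in spirit to Spark's approach.
class TokenRefresher {
  void refresh(UserGroupInformation keytabUgi, Configuration conf) throws Exception {
    keytabUgi.checkTGTAndReloginFromKeytab(); // re-login if the TGT expired

    Credentials freshCreds = new Credentials();
    keytabUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      FileSystem fs = FileSystem.get(conf);
      fs.addDelegationTokens("yarn", freshCreds); // renewer 'yarn' is illustrative
      return null;
    });

    // Make the new tokens visible to code running as the current user.
    UserGroupInformation.getCurrentUser().addCredentials(freshCreds);
  }
}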

Other Ways Tokens Are Used

Congratulations! You have now finished reading the concepts and details of Delegation Tokens. A few related aspects are not covered by this blog post; below is a brief introduction to them.

Delegation Tokens in other services: Services such as Apache Oozie, Apache Hive, and Apache Hadoop’s YARN RM also utilize Delegation Tokens for authentication.

Block Access Token: HDFS clients access a file by first contacting the NameNode to get the block locations of the file, then accessing the blocks directly on the DataNodes. File permission checking takes place on the NameNode, but the subsequent data block accesses on the DataNodes require authorization checks as well. Block Access Tokens are used for this purpose: they are issued by the HDFS NameNode to the client, and then passed to the DataNodes by the client. Block Access Tokens have a short lifetime (10 hours by default) and cannot be renewed; if a Block Access Token expires, the client has to request a new one.

Authentication Token: We have covered Delegation Tokens exclusively. Hadoop also has the concept of an Authentication Token, which is designed to be an even cheaper and more scalable authentication mechanism. It acts like a cookie between the server and the client: Authentication Tokens are granted by the server, cannot be renewed or used to impersonate others, and, unlike Delegation Tokens, do not need to be individually stored by the server. You should not need to explicitly code against Authentication Tokens.

Conclusion

Delegation Tokens play an important role in the Hadoop ecosystem. You should now understand the purpose of Delegation Tokens, how they are used, and why they are used this way. This knowledge is essential when writing and debugging applications.

Appendices

Appendix A. Configurations at the server-side.

Below is the table of configurations related to Delegation Tokens. Please see Delegation Tokens at the Server-side for explanations of these properties.

Renew Interval
  HDFS: dfs.namenode.delegation.token.renew-interval, default 86400000 ms (1 day)
  KMS: hadoop.kms.authentication.delegation-token.renew-interval.sec, default 86400 s (1 day)

Max Lifetime
  HDFS: dfs.namenode.delegation.token.max-lifetime, default 604800000 ms (7 days)
  KMS: hadoop.kms.authentication.delegation-token.max-lifetime.sec, default 604800 s (7 days)

Interval of background removal of expired tokens
  HDFS: not configurable; fixed at 3600000 ms (1 hour)
  KMS: hadoop.kms.authentication.delegation-token.removal-scan-interval.sec, default 3600 s (1 hour)

Master Key Rolling Interval
  HDFS: dfs.namenode.delegation.key.update-interval, default 86400000 ms (1 day)
  KMS: hadoop.kms.authentication.delegation-token.update-interval.sec, default 86400 s (1 day)

Table 4: Configuration Properties and Default Values for HDFS and KMS

Appendix B. Example Code of Authenticating with Delegation Tokens.

One concept to understand before looking at the code below is the UserGroupInformation (UGI) class. UGI is Hadoop’s public API for coding against authentication. It is used in the code examples below, and also appeared in some of the exception stack traces earlier.

getFileStatus() is used as an example of using UGIs to access HDFS.

UserGroupInformation tokenUGI = UserGroupInformation.createRemoteUser("user_name");
UserGroupInformation kerberosUGI = UserGroupInformation.loginUserFromKeytabAndReturnUGI("principal_in_keytab", "path_to_keytab_file");
Credentials creds = new Credentials();
kerberosUGI.doAs((PrivilegedExceptionAction<Void>) () -> {
  Configuration conf = new Configuration();
  FileSystem fs = FileSystem.get(conf);
  fs.getFileStatus(filePath); // ← kerberosUGI can authenticate via Kerberos

  // Get delegation tokens and add them to the provided credentials; set renewer to 'yarn'.
  Token<?>[] newTokens = fs.addDelegationTokens("yarn", creds);
  // Add all the credentials to the UGI object.
  tokenUGI.addCredentials(creds);

  // Alternatively, you can add the tokens to the UGI one by one.
  // This is functionally the same as above, but gives you the option to log each token acquired.
  for (Token<?> token : newTokens) {
    tokenUGI.addToken(token);
  }
  return null;
});

Note that the addDelegationTokens RPC call must be invoked with Kerberos authentication; otherwise, an IOException is thrown with the message “Delegation Token can be issued only with kerberos or web authentication”.
Now, we can use the acquired delegation token to access HDFS:

tokenUGI.doAs((PrivilegedExceptionAction<Void>) () -> {
  Configuration conf = new Configuration();
  FileSystem fs = FileSystem.get(conf);
  fs.getFileStatus(filePath); // ← tokenUGI can authenticate via Delegation Token
  return null;
});
Xiao Chen and Yongjun Zhang are Software Engineers at Cloudera, and Hadoop committers/PMC members.
