Yesterday we deployed the HBase query feature. It passed testing, but today it stopped working and the backend threw the following exception:
11:01:06,255 [org.apache.hadoop.security.UserGroupInformation]-[ERROR] PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/tmp/hadoop-root/mapred/staging/root933593746/.staging/job_local933593746_0001': No such file or directory

11:01:06,259 [com.cmcc.aoi.selfhelp.service.impl.HbaseTagTokenServiceImpl]-[ERROR]
org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access `/tmp/hadoop-root/mapred/staging/root933593746/.staging/job_local933593746_0001': No such file or directory
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:261)
    at org.apache.hadoop.util.Shell.run(Shell.java:188)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
    at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:593)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:427)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:579)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:171)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:293)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:364)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1286)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1283)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1283)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1304)
    at com.cmcc.aoi.selfhelp.service.impl.HbaseTagTokenServiceImpl.simpleTagUsercount(HbaseTagTokenServiceImpl.java:571)
    ......
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:64)
    at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:53)
    at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Judging from the log, it looked like a permissions problem, so I changed /tmp to mode 777. That made no difference.

Searching online, I saw reports that a single ext3 directory can hold at most about 32,000 entries. Could the directory simply be full? I counted the files under the staging directory: 31,998. I deleted them all and ran the job again, which threw the following exception:
11:58:20,527 [org.apache.hadoop.security.UserGroupInformation]-[ERROR] PriviledgedActionException as:root (auth:SIMPLE) cause:java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed

11:58:20,527 [com.cmcc.aoi.selfhelp.service.impl.HbaseTagTokenServiceImpl]-[ERROR]
java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:144)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:155)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:625)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:407)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1286)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1283)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1283)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1304)
    .....
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:188)
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:140)
    ... 59 more
Caused by: java.io.IOException: mkdir of /tmp/hadoop-root/mapred/local/-2428263295616411587 failed
    at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1042)
    at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:150)
    at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:190)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:698)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:695)
    at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
    at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:695)
    at org.apache.hadoop.yarn.util.FSDownload.createDir(FSDownload.java:88)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:274)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:51)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more
This time the exception changed: now a directory creation was failing. I looked under /tmp/hadoop-root/mapred/local/ and found that directory was full as well, so I deleted everything in it and ran the program again.

This time it worked.
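To keep the scratch directories from filling up again, a periodic cleanup along these lines could work. This is a minimal sketch, not a vetted production script: the two paths are taken from the logs above, and the 31,000 threshold is my own choice to stay safely under ext3's roughly 32,000-entries-per-directory ceiling.

```shell
#!/bin/sh
# Sketch: purge a Hadoop local-mode scratch directory when its entry count
# approaches the ext3 per-directory limit. Paths below are the ones from
# the exceptions above; ENTRY_LIMIT is an assumed safety margin.

ENTRY_LIMIT=31000

clean_if_full() {
    dir="$1"
    [ -d "$dir" ] || return 0          # nothing to do if the dir is absent
    count=$(ls -A "$dir" | wc -l)      # entries, excluding . and ..
    echo "$dir: $count entries"
    if [ "$count" -gt "$ENTRY_LIMIT" ]; then
        # delete the contents but keep the directory itself
        rm -rf "$dir"/* "$dir"/.[!.]* 2>/dev/null
        echo "$dir: purged"
    fi
}

clean_if_full /tmp/hadoop-root/mapred/staging
clean_if_full /tmp/hadoop-root/mapred/local
```

Run from cron, this would have caught both failures before jobs started breaking. A longer-term fix would be pointing Hadoop's hadoop.tmp.dir at a filesystem without this per-directory limit, so the scratch space does not live in /tmp at all.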