1. Start Hadoop: run `start-all.sh`. 2. Copy the local example data to HDFS, then run the streaming job: bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py 3. Inspect the output: bin/hadoop dfs -cat /user/hduser/gutenberg-output/part-00000
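The reducer passed via `-file`/`-reducer` above is a plain script that reads tab-separated key/value lines from stdin and writes aggregated lines to stdout. A minimal word-count reducer along those lines might look like this; it assumes the mapper emits `word\t1` lines and that Streaming has sorted them by key (the function and variable names here are illustrative, not the original tutorial's exact code):

```python
#!/usr/bin/env python
# Minimal sketch of a Hadoop Streaming word-count reducer.
# Assumption: input lines are "word\tcount" pairs, sorted by word,
# as Hadoop Streaming delivers them between the map and reduce phases.
import sys

def reduce_stream(lines):
    """Sum counts for consecutive runs of the same key and yield (word, total)."""
    current_word, current_count = None, 0
    for line in lines:
        word, _, count = line.rstrip("\n").partition("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                yield (current_word, current_count)
            current_word, current_count = word, int(count)
    if current_word is not None:
        yield (current_word, current_count)

if __name__ == "__main__":
    for word, total in reduce_stream(sys.stdin):
        print("%s\t%d" % (word, total))
```

Because the script is executed via stdin/stdout, it can be tested locally before submitting the job, e.g. `cat data | ./mapper.py | sort | ./reducer.py`.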
When running a MapReduce example, Hadoop throws the error:
java.io.IOException: All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting…
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2158)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
    at org.apach...