Category: HADOOP

2017-02-11 00:56:32

Usage of the hadoop and hdfs commands
hadoop:
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings
hdfs:
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  classpath            prints the classpath
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  mover                run a utility to move block replicas across
                       storage types
  oiv                  apply the offline fsimage viewer to an fsimage
  oiv_legacy           apply the offline fsimage viewer to a legacy fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                       Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache
  crypto               configure HDFS encryption zones
  storagepolicies      list/get/set block storage policies
  version              print the version
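A few of the administrative subcommands listed above can be strung together into a quick cluster health check. This is a minimal sketch, assuming a configured client and a running NameNode; it skips cleanly when no `hdfs` client is on the PATH.

```shell
#!/bin/sh
# Quick HDFS health check built from the subcommands above.
# Assumes a configured client; skips when `hdfs` is not installed.
if command -v hdfs >/dev/null 2>&1; then
  hdfs version                  # confirm which build the client is running
  hdfs dfsadmin -report         # capacity and per-datanode status
  hdfs fsck / -files -blocks    # integrity check of the whole namespace
else
  echo "hdfs client not found; skipping"
fi
ok=1   # marker so the sketch can be sanity-checked
```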


Common commands
1. List the contents of a directory
hadoop fs -ls [dir]
hadoop fs -ls /

2. View the contents of a file
hadoop fs -cat [filename]
hadoop fs -cat /data/t.txt
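`-cat` streams the entire file to stdout, so for large files it is worth piping or using the related subcommands. A sketch reusing the article's `/data/t.txt` example, skipped when no hadoop client is installed:

```shell
#!/bin/sh
# Viewing variants on -cat; /data/t.txt is the article's example path.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -cat /data/t.txt | head -n 20   # first 20 lines only
  hadoop fs -tail /data/t.txt               # last 1 KB of the file
  hadoop fs -text /data/t.txt               # like -cat, but decodes known compression codecs
else
  echo "hadoop client not found; skipping"
fi
ok=1
```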

3. Upload a local file/directory to HDFS
hadoop fs -put [file/dir] [dir]
hadoop fs -put /root/test /data
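A plain `-put` fails if the target already exists; on Hadoop 2.x the `-f` flag overwrites, and `-copyFromLocal` is the local-source-only equivalent. A sketch with the article's example paths, skipped when no client is present:

```shell
#!/bin/sh
# Upload variations; /root/test and /data are the article's example paths.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -put /root/test /data            # fails if /data/test already exists
  hadoop fs -put -f /root/test /data         # -f overwrites an existing target
  hadoop fs -copyFromLocal /root/test /data  # same effect; source must be local
else
  echo "hadoop client not found; skipping"
fi
ok=1
```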

4. Download a file from HDFS to a local directory
hadoop fs -get [file] [dir]
hadoop fs -get /data/t.txt /root
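When a MapReduce job has written many `part-*` files, `-getmerge` concatenates a whole HDFS directory into one local file. A sketch (the `/data/output` directory is a hypothetical example, not from the article), skipped without a client:

```shell
#!/bin/sh
# Download variants; /data/output is a hypothetical job-output directory.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -get /data/t.txt /root                 # single file
  hadoop fs -getmerge /data/output /root/out.txt   # merge a directory into one local file
else
  echo "hadoop client not found; skipping"
fi
ok=1
```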

5. Delete a file/directory on HDFS (including subdirectories)
hadoop fs -rm [file]
hadoop fs -rmr [dir]
hadoop fs -rm /data/t.txt
hadoop fs -rmr /data/test
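On Hadoop 2.x, `-rmr` still works but is deprecated in favor of `-rm -r`; deleted paths go to the user's trash when trash is enabled, and `-skipTrash` frees the space immediately. A sketch, skipped without a client:

```shell
#!/bin/sh
# Modern deletion forms; /data/t.txt and /data/test are the article's examples.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -rm /data/t.txt                # remove one file
  hadoop fs -rm -r /data/test              # remove a directory tree (replaces -rmr)
  hadoop fs -rm -r -skipTrash /data/test   # bypass the trash, reclaim space at once
else
  echo "hadoop client not found; skipping"
fi
ok=1
```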

6. Create a new directory under a given HDFS path
hadoop fs -mkdir [dir]
hadoop fs -mkdir /data/input

7. Create an empty file under a given HDFS path
hadoop fs -touchz [file]

hadoop fs -touchz /data/input/t.log

8. Rename a file on HDFS
hadoop fs -mv [filename1] [filename2]
hadoop fs -mv /data/input/t.log /data/input/t.txt

9. List running jobs
hadoop job -list [all]
hadoop job -list

10. Kill a specified job
hadoop job -kill [job-id]
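The job id passed to `-kill` comes from the `-list` output, and the subcommand lives under `hadoop job`, not `hadoop fs`. A sketch with a hypothetical job id, skipped without a client:

```shell
#!/bin/sh
# Look up a running job's id, then kill it (cluster required; skipped otherwise).
if command -v hadoop >/dev/null 2>&1; then
  hadoop job -list    # prints JobId, State, UserName, ...
  # hadoop job -kill job_1486745000000_0001   # hypothetical id taken from the -list output
else
  echo "hadoop client not found; skipping"
fi
ok=1
```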
