
Category: HADOOP

2013-04-10 18:31:43




When importing a delimited text file into HBase with importtsv, any trailing delimiter must be removed first; otherwise every row is rejected as a Bad Line and nothing is imported.
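The failure mode can be sketched with a quick field count (a minimal sketch, assuming importtsv rejects rows whose field count does not match the column list, which is what the Bad Lines counter below reflects):

```shell
# importtsv maps each delimited field to one column; a trailing
# separator produces an extra empty field, so the field count no
# longer matches the column list and the row is rejected.
echo '1,A,201304'  | awk -F',' '{print NF}'   # 3 fields: matches HBASE_ROW_KEY,cf:c1,cf:c2
echo '1,A,201304,' | awk -F',' '{print NF}'   # 4 fields: one too many
```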

For example:


[hadoop@hadoop1 bin]$ cat /tmp/emp.txt
1,A,201304,
2,B,201305,
3,C,201306,
4,D,201307,

Every line in this file ends with an extra comma.
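A quick way to check a file for trailing separators before loading it (a sketch; the file name /tmp/emp_check.txt is a hypothetical demo copy):

```shell
# Count lines that end with the separator; anything > 0 needs cleanup.
printf '1,A,201304,\n2,B,201305\n' > /tmp/emp_check.txt   # demo file (hypothetical)
grep -c ',$' /tmp/emp_check.txt                           # prints 1
```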

[hadoop@hadoop1 bin]$ hadoop fs -put /tmp/emp.txt /emp.txt



hbase(main):017:0> describe 't'
DESCRIPTION                                                                                      ENABLED                                             
 {NAME => 't', FAMILIES => [{NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', true                                                
  REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL =>                                                      
 '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE                                                     
 _ON_DISK => 'true', BLOCKCACHE => 'true'}]}                                                                                                         
1 row(s) in 0.1410 seconds



Table 't' has a single column family, 'cf'.



[hadoop@hadoop1 bin]$ hadoop jar /home/hadoop/hbase-0.94.6/hbase-0.94.6.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 -Dimporttsv.separator=, t /emp.txt

............

13/04/10 08:06:24 INFO mapred.JobClient: Running job: job_201304100706_0008
13/04/10 08:06:25 INFO mapred.JobClient:  map 0% reduce 0%
13/04/10 08:07:24 INFO mapred.JobClient:  map 100% reduce 0%
13/04/10 08:07:29 INFO mapred.JobClient: Job complete: job_201304100706_0008
13/04/10 08:07:29 INFO mapred.JobClient: Counters: 19
13/04/10 08:07:29 INFO mapred.JobClient:   Job Counters 
13/04/10 08:07:29 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=37179
13/04/10 08:07:29 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/04/10 08:07:29 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/04/10 08:07:29 INFO mapred.JobClient:     Rack-local map tasks=1
13/04/10 08:07:29 INFO mapred.JobClient:     Launched map tasks=1
13/04/10 08:07:29 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/04/10 08:07:29 INFO mapred.JobClient:   ImportTsv
13/04/10 08:07:29 INFO mapred.JobClient:     Bad Lines=4
13/04/10 08:07:29 INFO mapred.JobClient:   File Output Format Counters 
13/04/10 08:07:29 INFO mapred.JobClient:     Bytes Written=0
13/04/10 08:07:29 INFO mapred.JobClient:   FileSystemCounters
13/04/10 08:07:29 INFO mapred.JobClient:     HDFS_BYTES_READ=145
13/04/10 08:07:29 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=33535
13/04/10 08:07:29 INFO mapred.JobClient:   File Input Format Counters 
13/04/10 08:07:29 INFO mapred.JobClient:     Bytes Read=48
13/04/10 08:07:29 INFO mapred.JobClient:   Map-Reduce Framework
13/04/10 08:07:29 INFO mapred.JobClient:     Map input records=4
13/04/10 08:07:29 INFO mapred.JobClient:     Physical memory (bytes) snapshot=37830656
13/04/10 08:07:29 INFO mapred.JobClient:     Spilled Records=0
13/04/10 08:07:29 INFO mapred.JobClient:     CPU time spent (ms)=200
13/04/10 08:07:29 INFO mapred.JobClient:     Total committed heap usage (bytes)=8155136
13/04/10 08:07:29 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=345518080
13/04/10 08:07:29 INFO mapred.JobClient:     Map output records=0
13/04/10 08:07:29 INFO mapred.JobClient:     SPLIT_RAW_BYTES=97


The ImportTsv counter Bad Lines=4 (together with Map output records=0) shows that all four rows were rejected and discarded.
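One way to strip the trailing delimiter is sed (a minimal sketch; GNU sed's -i is assumed — on BSD/macOS use sed -i '' — and the printf only recreates the sample file):

```shell
# Recreate the sample file, then remove the trailing comma
# from every line in place.
printf '1,A,201304,\n2,B,201305,\n3,C,201306,\n4,D,201307,\n' > /tmp/emp.txt
sed -i 's/,$//' /tmp/emp.txt
cat /tmp/emp.txt
```

After the fix, each line has exactly three fields again, matching HBASE_ROW_KEY,cf:c1,cf:c2.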


[hadoop@hadoop1 bin]$ cat /tmp/emp.txt
1,A,201304
2,B,201305
3,C,201306
4,D,201307

[hadoop@hadoop1 bin]$ hadoop fs -rmr /emp.txt

Deleted hdfs://192.168.0.88:9000/emp.txt



[hadoop@hadoop1 bin]$ hadoop fs -put /tmp/emp.txt /emp.txt





[hadoop@hadoop1 bin]$ hadoop jar /home/hadoop/hbase-0.94.6/hbase-0.94.6.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 -Dimporttsv.separator=, t /emp.txt                             

13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:host.name=hadoop1
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:java.home=/java/jdk1.7.0/jre
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/hadoop-1.0.4/conf:/java/jdk1.7.0/lib/tools.jar:/home/hadoop/hadoop-1.0.4/libexec/..:/home/hadoop/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/guava-11.0.2.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/hbase-0.94.6-tests.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/hbase-0.94.6.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoo
p-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/protobuf-java-2.4.0a.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/zookeeper-3.4.3.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop-1.0.4/libexec/../lib/native/Linux-i386-32
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.18-92.el5xen
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/sqoop-1.4.3/bin
13/04/10 07:54:40 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.0.90:2181 sessionTimeout=180000 watcher=hconnection
13/04/10 07:54:40 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.0.90:2181
13/04/10 07:54:40 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
13/04/10 07:54:40 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 12549@hadoop1
13/04/10 07:54:40 INFO zookeeper.ClientCnxn: Socket connection established to hadoop3/192.168.0.90:2181, initiating session
13/04/10 07:54:46 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop3/192.168.0.90:2181, sessionid = 0x13df12619940011, negotiated timeout = 180000
13/04/10 07:54:56 INFO mapreduce.TableOutputFormat: Created table instance for t
13/04/10 07:54:56 INFO input.FileInputFormat: Total input paths to process : 1
13/04/10 07:54:56 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/04/10 07:54:56 WARN snappy.LoadSnappy: Snappy native library not loaded
13/04/10 07:54:57 INFO mapred.JobClient: Running job: job_201304100706_0007
13/04/10 07:54:59 INFO mapred.JobClient:  map 0% reduce 0%
13/04/10 07:57:29 INFO mapred.JobClient:  map 100% reduce 0%
13/04/10 07:57:37 INFO mapred.JobClient: Job complete: job_201304100706_0007
13/04/10 07:57:37 INFO mapred.JobClient: Counters: 19
13/04/10 07:57:37 INFO mapred.JobClient:   Job Counters 
13/04/10 07:57:37 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=125785
13/04/10 07:57:37 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/04/10 07:57:37 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/04/10 07:57:37 INFO mapred.JobClient:     Rack-local map tasks=1
13/04/10 07:57:37 INFO mapred.JobClient:     Launched map tasks=1
13/04/10 07:57:37 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/04/10 07:57:37 INFO mapred.JobClient:   ImportTsv
13/04/10 07:57:37 INFO mapred.JobClient:     Bad Lines=0
13/04/10 07:57:37 INFO mapred.JobClient:   File Output Format Counters 
13/04/10 07:57:37 INFO mapred.JobClient:     Bytes Written=0
13/04/10 07:57:37 INFO mapred.JobClient:   FileSystemCounters
13/04/10 07:57:37 INFO mapred.JobClient:     HDFS_BYTES_READ=141
13/04/10 07:57:37 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=33537
13/04/10 07:57:37 INFO mapred.JobClient:   File Input Format Counters 
13/04/10 07:57:37 INFO mapred.JobClient:     Bytes Read=44
13/04/10 07:57:37 INFO mapred.JobClient:   Map-Reduce Framework
13/04/10 07:57:37 INFO mapred.JobClient:     Map input records=4
13/04/10 07:57:37 INFO mapred.JobClient:     Physical memory (bytes) snapshot=37867520
13/04/10 07:57:37 INFO mapred.JobClient:     Spilled Records=0
13/04/10 07:57:37 INFO mapred.JobClient:     CPU time spent (ms)=170
13/04/10 07:57:37 INFO mapred.JobClient:     Total committed heap usage (bytes)=7950336
13/04/10 07:57:37 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=345387008
13/04/10 07:57:37 INFO mapred.JobClient:     Map output records=4
13/04/10 07:57:37 INFO mapred.JobClient:     SPLIT_RAW_BYTES=97

This time the counters show Bad Lines=0 and Map output records=4, and a scan confirms that all four rows were loaded:



hbase(main):016:0> scan 't'
ROW                                    COLUMN+CELL                                                                                                   
 1                                     column=cf:c1, timestamp=1365551680259, value=A                                                                
 1                                     column=cf:c2, timestamp=1365551680259, value=201304                                                           
 2                                     column=cf:c1, timestamp=1365551680259, value=B                                                                
 2                                     column=cf:c2, timestamp=1365551680259, value=201305                                                           
 3                                     column=cf:c1, timestamp=1365551680259, value=C                                                                
 3                                     column=cf:c2, timestamp=1365551680259, value=201306                                                           
 4                                     column=cf:c1, timestamp=1365551680259, value=D                                                                
 4                                     column=cf:c2, timestamp=1365551680259, value=201307                                                           
4 row(s) in 0.5480 seconds
