1. Export the required data from MongoDB in .csv format
./bin/mongoexport -h (host) --port (port) -d (dbname) -c (collection name) --csv -f (field names, separated by ",") -o (output file)
Example:
./bin/mongoexport -h 127.0.0.1 --port 21000 -d local -c startup_log --csv -f _id,hostname,host106,startTime,startTimeLocal -o /data/backup/201509/local.csv
split -l 10000000 local.csv    (the file is too large, so split it into chunks of 10,000,000 lines each)
sed -i '1d' xaa    # delete the header row (column names) from the first chunk
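The split-and-trim step can be rehearsed on a tiny sample before touching the real 10,000,000-line chunks; sample.csv and the 2-line chunk size below are hypothetical stand-ins for local.csv and the real chunk size:

```shell
# Work in a scratch directory so the x* chunk names don't collide with anything
cd "$(mktemp -d)"
# Hypothetical small CSV standing in for the exported local.csv
printf '_id,hostname\n1,a\n2,b\n3,c\n' > sample.csv
# split names its chunks xaa, xab, ... by default (real run: split -l 10000000 local.csv)
split -l 2 sample.csv
# Only xaa carries the CSV header line; strip it before loading
sed -i '1d' xaa
cat xaa    # prints: 1,a
```

Because only the first chunk inherits the header from the CSV, only xaa needs the `sed -i '1d'` treatment; the remaining chunks are pure data.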
2. Import the csv files into MySQL
mysql --local-infile cfq    // log in to the cfq database
>create table startup_log (_id int primary key auto_increment, hostname varchar(100) not null, host106 int not null, startTime datetime not null, startTimeLocal datetime not null);    // create the table with the required fields
>LOAD DATA LOCAL INFILE '/data/backup/201509/xaa'    // import the contents of xaa into startup_log
IGNORE INTO TABLE startup_log
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(_id,
hostname,
host106,
startTime,
startTimeLocal );
Query OK, 3 rows affected, 12 warnings (0.05 sec)
Records: 3 Deleted: 0 Skipped: 0 Warnings: 12
After checking that the imported rows look correct, generate the SQL to load the remaining chunk files into MySQL:
# while read file; do
cat <<EEE >> source.sql
LOAD DATA LOCAL INFILE '/data/backup/201509/${file}'
IGNORE INTO TABLE startup_log
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(_id,
hostname,
host106,
startTime,
startTimeLocal
);
EEE
done <<< "`ls x* |grep -v xaa`"
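The generator loop above can be exercised end-to-end with dummy chunk files (xab and xac here are hypothetical stand-ins for the real split output), to confirm that source.sql ends up with one LOAD DATA statement per remaining chunk:

```shell
# Scratch directory with empty files standing in for the real split chunks
cd "$(mktemp -d)"
touch xaa xab xac
: > source.sql
# One LOAD DATA statement per chunk, skipping xaa (already imported by hand)
for file in $(ls x* | grep -v xaa); do
cat <<EEE >> source.sql
LOAD DATA LOCAL INFILE '/data/backup/201509/${file}'
IGNORE INTO TABLE startup_log
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(_id, hostname, host106, startTime, startTimeLocal);
EEE
done
grep -c 'LOAD DATA' source.sql    # one statement each for xab and xac
```

Since the heredoc delimiter EEE is unquoted, ${file} is expanded into each statement while the quoted ',' and '\n' terminators pass through literally, which is exactly what the LOAD DATA syntax needs.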
screen -S loaddata
# mysql --local-infile -u -p -h127.0.0.1 -P3388 cfq -v -v -v    # log in to the cfq database in MySQL
source /data/backup/201509/source.sql