
Category: iOS

2015-09-19 23:24:56

sqlite3 bulk insert
Bulk Insert into sqlite – FMDatabase
http://blog.csdn.net/alexjames_83/article/details/7471960

Not much to share, just this:

"When you need bulk insert a good number of rows into your sqlite database using FMDatabase, executing a series of INSERT statement is sure to kill performance. Just simply use transaction will drastically optimize performance."

Add the following lines of code around the inserts:

[db beginTransaction];
//DO BULK INSERT HERE
[db commit];

WHY?

When all the INSERTs are put in a single transaction, SQLite no longer has to create, sync, and clean up its journal for every individual statement. It also does not have to do any fsync()s until the very end.

Here's a test: 25000 INSERTs into an indexed table

BEGIN;
CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));
CREATE INDEX i3 ON t3(c);
... 24998 lines omitted
INSERT INTO t3 VALUES(24999,88509,'eighty eight thousand five hundred nine');
INSERT INTO t3 VALUES(25000,84791,'eighty four thousand seven hundred ninety one');
COMMIT;

SQLite 2.7.6: 1.5 s

Optimizing bulk inserts in SQLite
http://blog.csdn.net/pipisky2006/article/details/6917399

An SQLite database is essentially just a file on disk, so every database operation ultimately turns into file I/O, and frequent file operations are very time-consuming and drastically slow down database access.
For example, suppose you insert 1,000,000 rows by simply executing
sqlite3_exec(db, "INSERT INTO name VALUES('lxkxf', '24');", 0, 0, &zErrMsg);
once per row. By default each statement runs in its own implicit transaction, so the journal is written and synced to disk 1,000,000 times, which is naturally very slow. In this situation you should use an explicit transaction.
Concretely, wrap the whole batch of SQL statements like this:
rc = sqlite3_exec(db, "BEGIN;", 0, 0, &zErrMsg);
// execute the INSERT statements here
rc = sqlite3_exec(db, "COMMIT;", 0, 0, &zErrMsg);
SQLite then keeps the pending changes in memory and writes them to the database file in one go at COMMIT, so the expensive file work happens only once and performance improves dramatically. Here is a comparison of timings:
Test 1: 1000 INSERTs
CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100)); 
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three'); 
INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty'); 
... 995 lines omitted 
INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine'); 
INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two'); 
INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two'); 
SQLite 2.7.6: 13.061 s
SQLite 2.7.6 (nosync): 0.223 s

Test 2: 25000 INSERTs in a transaction
BEGIN; 
CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100)); 
INSERT INTO t2 VALUES(1,59672,'fifty nine thousand six hundred seventy two'); 
... 24997 lines omitted 
INSERT INTO t2 VALUES(24999,89569,'eighty nine thousand five hundred sixty nine'); 
INSERT INTO t2 VALUES(25000,94666,'ninety four thousand six hundred sixty six'); 
COMMIT; 
SQLite 2.7.6: 0.914 s
SQLite 2.7.6 (nosync): 0.757 s

As you can see, using a transaction greatly improves efficiency. Keep in mind, though, that a transaction has some overhead of its own, so for very small amounts of data it is not worth using one, to avoid the extra cost.

The equivalent on Android:

Solution:

Add transaction handling and make the 5,000 inserts a single transaction.

dataBase.beginTransaction();        // manually start the transaction

// loop over the data and perform the inserts here

dataBase.setTransactionSuccessful();        // mark the transaction successful; without this it is rolled back instead of committed

dataBase.endTransaction();        // finish (commit) the transaction
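
For reference, here is a minimal sketch of the same pattern with the usual try/finally structure. The table name person and its name/age columns are invented for illustration and are not part of the original post:

import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;

// Insert a batch of rows into a hypothetical person(name TEXT, age INTEGER) table
// inside one transaction.
static void bulkInsert(SQLiteDatabase db, String[] names, int[] ages) {
    db.beginTransaction();                 // one transaction for the whole batch
    try {
        for (int i = 0; i < names.length; i++) {
            ContentValues cv = new ContentValues();
            cv.put("name", names[i]);
            cv.put("age", ages[i]);
            db.insert("person", null, cv); // every insert shares the same transaction
        }
        db.setTransactionSuccessful();     // without this, endTransaction() rolls back
    } finally {
        db.endTransaction();               // commits on success, rolls back otherwise
    }
}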




PS: on Android, SQLite is usually accessed through a ContentResolver/ContentProvider wrapper, which already provides a bulk insert method:

public final int bulkInsert (Uri url, ContentValues[] values)

Since: API Level 1

Inserts multiple rows into a table at the given URL. This function makes no guarantees about the atomicity of the insertions.

Parameters
url The URL of the table to insert into.
values The initial values for the newly inserted rows. The key is the column name for the field. Passing null will create an empty row.
Returns
  • the number of newly created rows. 
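
A quick usage sketch; the content:// URI and the column names below are invented for illustration and depend entirely on the ContentProvider you are talking to:

import android.content.ContentResolver;
import android.content.ContentValues;
import android.net.Uri;

// Build one ContentValues per row and hand the whole array to the provider
// in a single bulkInsert() call (hypothetical provider URI and columns).
static int insertPeople(ContentResolver resolver, String[] names, int[] ages) {
    Uri uri = Uri.parse("content://com.example.provider/person");
    ContentValues[] rows = new ContentValues[names.length];
    for (int i = 0; i < names.length; i++) {
        ContentValues cv = new ContentValues();
        cv.put("name", names[i]);
        cv.put("age", ages[i]);
        rows[i] = cv;
    }
    return resolver.bulkInsert(uri, rows);  // returns the number of rows created
}

Note that the default ContentProvider.bulkInsert() implementation simply calls insert() once per row, so a provider only gets the transaction speed-up described above if it overrides bulkInsert() to wrap the batch in a transaction.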

