linux 3.10
drivers/md/md.c, Multiple Device driver, RAID
drivers/md/dm.c, Device mapper
forwards bios to different underlying devices according to the target type,
https://www.ibm.com/developerworks/cn/linux/l-devmapper/
http://blog.csdn.net/sonicling/article/details/5460311
LVM volume types and definitions:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Cluster_Logical_Volume_Manager/lv_overview.html#linear_volumes
dm initialization: loads the different target types,
static int (*_inits[])(void) __initdata = {
    local_init,        // local, module-level initialization
    dm_target_init,    // nothing concrete by itself, only sets up the target framework
    dm_linear_init,    // linear target: remaps bio->bi_sector per target,
                       // i.e. just adds the target's offset to the sector and redirects
                       // the bio to the backing device (see the sketch after this block)
    dm_stripe_init,
    dm_io_init,
    dm_kcopyd_init,
    dm_interface_init,
};
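The linear remapping mentioned above is only simple sector arithmetic. A minimal sketch of the idea, modeled on linear_map_bio() in dm-linear.c (the struct mirrors the real per-target context, but this is an illustration, not the exact source):

/* Sketch: how the linear target remaps a bio (modeled on dm-linear.c). */
struct linear_c {
    struct dm_dev *dev;    /* backing device chosen in the ctr */
    sector_t start;        /* start offset on that device */
};

static void linear_map_bio_sketch(struct dm_target *ti, struct bio *bio)
{
    struct linear_c *lc = ti->private;

    bio->bi_bdev = lc->dev->bdev;        /* redirect to the backing device */
    if (bio_sectors(bio))
        /* linear translation: offset inside the target + start of the device range */
        bio->bi_sector = lc->start + (bio->bi_sector - ti->begin);
}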
dm.c itself only pulls in the most basic target types;
a target type registers itself by calling dm_register_target(), e.g. dm_register_target(&stripe_target):
static struct target_type stripe_target = {
    .name    = "striped",
    .version = {1, 5, 1},
    .module  = THIS_MODULE,
    .ctr     = stripe_ctr,        // constructor
    .dtr     = stripe_dtr,        // destructor
    .map     = stripe_map,        // the core: maps each bio (see the sketch after this block)
    .end_io  = stripe_end_io,
    .status  = stripe_status,
    .iterate_devices = stripe_iterate_devices,
    .io_hints = stripe_io_hints,
    .merge   = stripe_merge,
};
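For the striped target the .map callback is just chunk arithmetic. A simplified sketch of the idea, loosely based on stripe_map()/stripe_map_sector() in dm-stripe.c, assuming plain 64-bit division instead of the sector_div()/shift fast paths in the real code; stripe_devs[] and stripe_starts[] are illustrative stand-ins for the fields of struct stripe_c:

/* Sketch: pick a stripe for a bio and remap it onto that stripe's device. */
static void stripe_map_sketch(struct dm_target *ti, struct bio *bio,
                              unsigned stripes, sector_t chunk_size,
                              struct dm_dev **stripe_devs,
                              sector_t *stripe_starts)
{
    sector_t offset = bio->bi_sector - ti->begin;   /* offset inside this target */
    sector_t chunk = offset / chunk_size;           /* which chunk of the volume */
    sector_t chunk_offset = offset % chunk_size;    /* position inside that chunk */
    unsigned stripe = chunk % stripes;              /* chunks go round-robin over the stripes */

    bio->bi_bdev = stripe_devs[stripe]->bdev;
    bio->bi_sector = stripe_starts[stripe] +
                     (chunk / stripes) * chunk_size + chunk_offset;
}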
IO entry point:
dm_setup_md_queue(md),
builds the standard bio I/O queue for the given md;
every bio subsequently submitted to this device is re-dispatched internally (see the sketch after this call list)
=> dm_init_request_based_queue()
builds the request_queue, calling in turn:
md->queue = blk_init_allocated_queue(md->queue, dm_request_fn, NULL);
dm_init_md_queue(md);
blk_queue_softirq_done(md->queue, dm_softirq_done);
blk_queue_prep_rq(md->queue, dm_prep_fn);
blk_queue_lld_busy(md->queue, dm_lld_busy);
// register the queue with the system's disk elevator (I/O scheduler)
elv_register_queue(md->queue);
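The internal re-dispatch mentioned above, on the bio-based path, boils down to: clone the incoming bio, let the target's .map() rewrite it, then push it back into the block layer. A condensed sketch of that step, loosely modeled on __map_bio() in drivers/md/dm.c (not the exact code):

/* Sketch: dispatching one cloned bio through its target's map function. */
static void dispatch_clone_sketch(struct dm_target *ti, struct bio *clone)
{
    int r = ti->type->map(ti, clone);       /* target rewrites bi_bdev / bi_sector */

    if (r == DM_MAPIO_REMAPPED)
        generic_make_request(clone);        /* hand the remapped bio to the block layer */
    else if (r == DM_MAPIO_SUBMITTED)
        ;                                   /* the target queued or completed it itself */
}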
dm-ioctl.c,
lookup_ioctl(): interfaces with the userspace control tools, loads the LVM table settings,
{DM_TABLE_LOAD_CMD, 0, table_load}, =>
dm_setup_md_queue(), dm.c
dm-snap.c
dm_snapshot_init()
dm_register_target(&snapshot_target);
dm_register_target(&origin_target);
dm_register_target(&merge_target);
LVM implements snapshots using COW (copy-on-write)
/usr/sbin/lvcreate -L10G -n snap-root -s lvmvolume/root
the lvcreate command with -s creates a snapshot from the current point in time; the original LV is called the origin,
and a new snap is created, i.e. the origin_target and snapshot_target pair
origin_target: essentially does nothing by itself
snapshot_target:
the LVM snap implementation is based on COW; I/O on the origin is no different from ordinary reads and writes,
the snap's COW is carried out by a dm_kcopyd_client (dm-kcopyd.c),
i.e. the original data of a block about to be changed is written out to the snap as a backup (the classic approach)
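Conceptually the COW bookkeeping is just a table mapping origin chunks that have been written to the place on the COW device holding their original data. A rough, self-contained sketch of the write path; all names here are illustrative, the real table is a hash of struct dm_exception handled by snapshot_map()/pending_complete() in dm-snap.c:

/* Sketch: remember, per chunk, where its pre-write contents were stashed. */
#define MAX_EXCEPTIONS_SKETCH 1024

struct exception_sketch {
    sector_t old_chunk;    /* chunk number on the origin */
    sector_t new_chunk;    /* chunk on the COW device holding the original data */
};

static struct exception_sketch exceptions_sketch[MAX_EXCEPTIONS_SKETCH];
static unsigned nr_exceptions_sketch, next_free_cow_chunk_sketch;

static struct exception_sketch *lookup_exception_sketch(sector_t old_chunk)
{
    unsigned i;

    for (i = 0; i < nr_exceptions_sketch; i++)
        if (exceptions_sketch[i].old_chunk == old_chunk)
            return &exceptions_sketch[i];
    return NULL;
}

/* Called (conceptually) before a write is allowed to touch @old_chunk. */
static void cow_before_write_sketch(sector_t old_chunk)
{
    struct exception_sketch *e = lookup_exception_sketch(old_chunk);

    if (e)
        return;        /* already backed up, the write can proceed */

    e = &exceptions_sketch[nr_exceptions_sketch++];
    e->old_chunk = old_chunk;
    e->new_chunk = next_free_cow_chunk_sketch++;
    /* here the real code kicks off a dm_kcopyd copy origin -> COW and only
     * releases the write once that copy has completed */
}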
snapshot_ctr() =>
dm_get_device(): grabs the origin and COW devices
dm-exception-store.c,
dm_exception_store_create(): an extra storage area can be specified for the snapshot's metadata,
struct dm_exception_store_type *type = _get_exception_store_type(),
two kinds; persistent ("P"): the snap is still usable after a reboot, since it records which regions have already been COWed,
the LVM snap implementation in the current kernel ignores compatibility entirely; snaps on different disks are not guaranteed to be interchangeable,
so lvcreate should build the snap on a brand-new COW device.
transient: nothing besides the framework
dm_kcopyd_client_create(), dm-kcopyd.c
register_snapshot(),
links the new dm_snapshot into the origin->snapshots list, ordered by chunk_size
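The ordered insert is the usual kernel linked-list pattern: walk the list until an element with a smaller key is found and insert in front of it. A generic sketch of the pattern with a made-up struct (the real code is __insert_snapshot() in dm-snap.c, keyed on the store's chunk_size, largest first):

#include <linux/list.h>

/* Sketch: keep a list sorted by chunk_size, largest first (illustrative struct). */
struct snap_sketch {
    struct list_head list;
    unsigned chunk_size;
};

static void insert_sorted_sketch(struct list_head *snapshots, struct snap_sketch *s)
{
    struct snap_sketch *l;

    list_for_each_entry(l, snapshots, list)
        if (l->chunk_size < s->chunk_size)
            break;                          /* first entry smaller than ours */
    list_add_tail(&s->list, &l->list);      /* insert just in front of it */
}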
merge, see:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/snapshot_merge.html
copies the COW data back onto the origin
snapshot_merge_resume() =>
snapshot_resume()
start_merge() =>
dm_kcopyd_copy()
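The actual data movement at each merge step is one dm_kcopyd_copy() call from the COW area back onto the origin. A condensed sketch of how such a call is set up; the region values and the helper itself are illustrative, the real setup lives in the merge path of dm-snap.c:

/* Sketch: copy one chunk-sized region from the COW device back to the origin. */
static void merge_callback_sketch(int read_err, unsigned long write_err, void *context)
{
    /* the real callback re-queues the next chunk or marks the merge as failed */
}

static void merge_one_chunk_sketch(struct dm_kcopyd_client *kc,
                                   struct block_device *cow_bdev, sector_t cow_sector,
                                   struct block_device *origin_bdev, sector_t origin_sector,
                                   sector_t nr_sectors, void *context)
{
    struct dm_io_region src, dest;

    src.bdev = cow_bdev;          /* where the snapshot stashed the original data */
    src.sector = cow_sector;
    src.count = nr_sectors;

    dest.bdev = origin_bdev;      /* write it back over the origin */
    dest.sector = origin_sector;
    dest.count = nr_sectors;

    dm_kcopyd_copy(kc, &src, 1, &dest, 0, merge_callback_sketch, context);
}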