Below are some descriptions of dynamic and static wear leveling. I have left them in their original English rather than risk a confusing translation.
Dynamic wear leveling
When dynamic wear leveling is applied, new data is programmed to the free blocks (among the blocks used to store user data) that have had the fewest WRITE/ERASE cycles.
Static wear leveling
With static wear leveling, the content of blocks storing static data (such as code) is copied to another block, so that the original block can be used for data that changes more frequently.
Static wear leveling is triggered when the difference between the maximum and the minimum number of WRITE/ERASE cycles per block reaches a specific threshold. With this technique, the mean age of the physical NAND blocks is kept constant.
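That trigger condition can be sketched in a few lines of C. The names and the threshold value here are illustrative, not taken from any particular implementation:

```c
#include <assert.h>

/* Hypothetical threshold; real systems (e.g. UBI's UBI_WL_THRESHOLD)
 * tune this per flash device. */
#define WL_THRESHOLD 4096

/* Return 1 if the gap between the most-worn and least-worn blocks is
 * large enough that static wear leveling should run. */
static int should_trigger_static_wl(unsigned int max_ec, unsigned int min_ec)
{
	return (max_ec - min_ec) >= WL_THRESHOLD;
}
```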
YAFFS and UBIFS are two file systems that have emerged specifically for NAND flash. Of the two, YAFFS cannot accurately assess the wear of each physical block, and it does not use static wear leveling to further reduce overall wear.
```c
static int yaffs_FindBlockForAllocation(yaffs_Device *dev)
{
	int i;
	yaffs_BlockInfo *bi;	/* declaration missing in the original excerpt */

	/* Find an empty block. */
	for (i = dev->internalStartBlock; i <= dev->internalEndBlock; i++) {
		dev->allocationBlockFinder++;
		if (dev->allocationBlockFinder < dev->internalStartBlock
		    || dev->allocationBlockFinder > dev->internalEndBlock) {
			dev->allocationBlockFinder = dev->internalStartBlock;
		}

		bi = yaffs_GetBlockInfo(dev, dev->allocationBlockFinder);

		if (bi->blockState == YAFFS_BLOCK_STATE_EMPTY) {
			bi->blockState = YAFFS_BLOCK_STATE_ALLOCATING;
			dev->sequenceNumber++;
			bi->sequenceNumber = dev->sequenceNumber;
			dev->nErasedBlocks--;
			return dev->allocationBlockFinder;
		}
	}
	return -1;	/* no empty block found */
}
```
As the code above shows, each allocation advances dev->allocationBlockFinder and takes the first block whose state is YAFFS_BLOCK_STATE_EMPTY. Because allocationBlockFinder keeps moving as the search proceeds, every free block has roughly the same chance of being chosen. The fundamental point, however, is that YAFFS consults no wear statistics at all when picking a free block: its wear-leveling policy is fuzzy and carries a degree of randomness.
A similar approach is the so-called linked-list management strategy, which, much like a FAT table, organizes the system's free blocks into a linked list (as shown in the figure below): each newly erased block is appended to the tail of the list, and each allocation takes a block from the head. Compared with YAFFS, linked-list management has an advantage, because its time complexity is O(1).
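The scheme just described can be sketched as follows. The structure and function names are illustrative, not from any real file system; the point is that both the tail insert and the head removal take constant time:

```c
#include <assert.h>
#include <stddef.h>

/* A free block in the list; pnum stands in for a physical block number. */
struct free_block {
	int pnum;
	struct free_block *next;
};

struct free_list {
	struct free_block *head, *tail;
};

/* Append a newly erased block at the tail: O(1). */
static void free_list_add(struct free_list *l, struct free_block *b)
{
	b->next = NULL;
	if (l->tail)
		l->tail->next = b;
	else
		l->head = b;
	l->tail = b;
}

/* Allocate by taking the block at the head, or NULL if empty: O(1). */
static struct free_block *free_list_take(struct free_list *l)
{
	struct free_block *b = l->head;
	if (b) {
		l->head = b->next;
		if (!l->head)
			l->tail = NULL;
	}
	return b;
}
```

Because erased blocks enter at one end and leave at the other, the list also gives a rough first-in, first-out rotation of the free blocks.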
The jffs2 file system is similar, using clean_list and dirty_list to manage free and dirty blocks.
```c
static int jffs2_find_nextblock(struct jffs2_sb_info *c)
{
	struct list_head *next;

	/* Take the next block off the 'free' list */
	next = c->free_list.next;
	list_del(next);
	c->nextblock = list_entry(next, struct jffs2_eraseblock, list);
	c->nr_free_blocks--;

	jffs2_sum_reset_collected(c->summary); /* reset collected summary */

#ifdef CONFIG_JFFS2_FS_WRITEBUFFER
	/* adjust write buffer offset, else we get a non contiguous write bug */
	if (!(c->wbuf_ofs % c->sector_size) && !c->wbuf_len)
		c->wbuf_ofs = 0xffffffff;
#endif

	D1(printk(KERN_DEBUG "jffs2_find_nextblock(): new nextblock = 0x%08x\n", c->nextblock->offset));
	return 0;
}
```
Because the code is long, the if (list_empty(&c->free_list)) portion has been removed. Even so, the code makes it clear that jffs2 uses essentially the same linked-list management scheme: no accurate wear estimate, just near-random allocation.
Now let us look at how UBIFS, via the UBI layer, obtains a free block.
```c
int ubi_wl_get_peb(struct ubi_device *ubi, int dtype)
{
	int err, medium_ec;
	struct ubi_wl_entry *e, *first, *last;

	ubi_assert(dtype == UBI_LONGTERM || dtype == UBI_SHORTTERM ||
		   dtype == UBI_UNKNOWN);

	/* some code omitted here */

	switch (dtype) {
	case UBI_LONGTERM:
		/*
		 * For long term data we pick a physical eraseblock with high
		 * erase counter. But the highest erase counter we can pick is
		 * bounded by the lowest erase counter plus %WL_FREE_MAX_DIFF.
		 */
		e = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF);
		break;
	case UBI_UNKNOWN:
		/*
		 * For unknown data we pick a physical eraseblock with medium
		 * erase counter. But we by no means can pick a physical
		 * eraseblock with an erase counter greater than or equal to
		 * the lowest erase counter plus %WL_FREE_MAX_DIFF.
		 */
		first = rb_entry(rb_first(&ubi->free), struct ubi_wl_entry,
				 u.rb);
		last = rb_entry(rb_last(&ubi->free), struct ubi_wl_entry, u.rb);

		if (last->ec - first->ec < WL_FREE_MAX_DIFF)
			e = rb_entry(ubi->free.rb_node,
				     struct ubi_wl_entry, u.rb);
		else {
			medium_ec = (first->ec + WL_FREE_MAX_DIFF)/2;
			e = find_wl_entry(&ubi->free, medium_ec);
		}
		break;
	case UBI_SHORTTERM:
		/*
		 * For short term data we pick a physical eraseblock with the
		 * lowest erase counter as we expect it will be erased soon.
		 */
		e = rb_entry(rb_first(&ubi->free), struct ubi_wl_entry, u.rb);
		break;
	default:
		BUG();
	}

	paranoid_check_in_wl_tree(e, &ubi->free);

	/* some code omitted here */

	return e->pnum;
}
```
UBI divides data into three types:

```c
enum {
	UBI_LONGTERM = 1,
	UBI_SHORTTERM = 2,
	UBI_UNKNOWN = 3,
};
```
If the data is to be kept long term, a block with a high EC (erase counter), i.e. one of the most worn blocks, is chosen to hold the static data. If the data is short term, a block with the lowest EC, i.e. the least worn, is chosen to hold the dynamic data. If the data is of type UBI_UNKNOWN, a block with a medium EC is chosen.
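This selection policy can be sketched over a plain array sorted by erase counter. UBI itself keeps the free blocks in a red-black tree and its UBI_UNKNOWN case is slightly more involved, so the function below is only an illustrative simplification, with a made-up WL_FREE_MAX_DIFF value:

```c
#include <assert.h>

#define WL_FREE_MAX_DIFF 4096	/* illustrative bound on wear spread */

enum { UBI_LONGTERM = 1, UBI_SHORTTERM = 2, UBI_UNKNOWN = 3 };

/* ecs[] holds the erase counters of the free blocks, sorted ascending.
 * Returns the index of the block chosen for the given data type. */
static int pick_free_block(const int *ecs, int n, int dtype)
{
	int limit, i;

	switch (dtype) {
	case UBI_SHORTTERM:
		/* frequently rewritten data: take the least-worn block */
		return 0;
	case UBI_LONGTERM:
		/* static data: the most-worn block whose EC does not
		 * exceed the lowest EC plus WL_FREE_MAX_DIFF */
		limit = ecs[0] + WL_FREE_MAX_DIFF;
		for (i = n - 1; i > 0; i--)
			if (ecs[i] <= limit)
				break;
		return i;
	default:
		/* unknown data: a block of medium wear */
		limit = ecs[0] + WL_FREE_MAX_DIFF / 2;
		for (i = n - 1; i > 0; i--)
			if (ecs[i] <= limit)
				break;
		return i;
	}
}
```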
All three schemes above are forms of dynamic wear leveling, and the objects they manage are only the free blocks and the dirty blocks. Although the allocator only hands out free blocks, the timing of garbage collection determines how many free blocks exist, so dirty blocks still fall within the scope of dynamic wear leveling. In fact, since UBI manages erase counters with red-black trees, and the number of blocks in current flash devices is generally between 1024 and 10240, the time spent allocating a free block fluctuates around a nearly constant value. Moreover, compared with the I/O-intensive reads and writes of the NAND flash itself, the time consumed by allocation is practically negligible.
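A rough sanity check on that claim: a balanced search tree over n blocks is about floor(log2 n) levels deep, so a lookup touches only 10 to 13 nodes for the block counts mentioned above. The helper below is purely illustrative:

```c
#include <assert.h>

/* Depth of a balanced binary search tree over n entries,
 * i.e. floor(log2(n)). */
static int tree_depth(unsigned int n)
{
	int d = 0;
	while (n > 1) {
		n >>= 1;
		d++;
	}
	return d;
}
```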
Static wear leveling, by contrast, covers all physical blocks in the system. Below is an excerpt of the static wear-leveling code in UBI.
Because the code is long, its comments have been removed.
```c
static int ensure_wear_leveling(struct ubi_device *ubi)
{
	int err = 0;
	struct ubi_wl_entry *e1;
	struct ubi_wl_entry *e2;
	struct ubi_work *wrk;

	spin_lock(&ubi->wl_lock);
	if (ubi->wl_scheduled)
		/* Wear-leveling is already in the work queue */
		goto out_unlock;

	if (!ubi->scrub.rb_node) {
		if (!ubi->used.rb_node || !ubi->free.rb_node)
			/* No physical eraseblocks - no deal */
			goto out_unlock;

		/* the used block with the lowest EC */
		e1 = rb_entry(rb_first(&ubi->used), struct ubi_wl_entry, u.rb);
		/* the most-worn eligible free block */
		e2 = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF);

		/* is the wear gap large enough to justify leveling? */
		if (!(e2->ec - e1->ec >= UBI_WL_THRESHOLD))
			goto out_unlock;
		dbg_wl("schedule wear-leveling");
	} else
		dbg_wl("schedule scrubbing");

	ubi->wl_scheduled = 1;
	spin_unlock(&ubi->wl_lock);

	wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
	if (!wrk) {
		err = -ENOMEM;
		goto out_cancel;
	}

	/* the worker that actually performs the leveling, i.e. data migration */
	wrk->func = &wear_leveling_worker;
	schedule_ubi_work(ubi, wrk);
	return err;

out_cancel:
	spin_lock(&ubi->wl_lock);
	ubi->wl_scheduled = 0;
out_unlock:
	spin_unlock(&ubi->wl_lock);
	return err;
}
```
As the definition of static wear leveling at the top of this post states, it is triggered when the gap between the highest and lowest erase counts in the system exceeds a preset threshold. But static wear leveling causes extra data migration, and therefore extra wear, so it must be used with great caution.