sgpool-128 3 4 3072 2 2 : tunables 24 12 8 : slabdata 2 2 0
sgpool-64 2 5 1536 5 2 : tunables 24 12 8 : slabdata 1 1 0
How does the slab allocator lay out its objects? When a cache is created, how many pages does each slab occupy, and how many objects fit in each slab? I am not sure how many people have looked into this. The two lines above are taken from the output of cat /proc/slabinfo. They describe two caches named sgpool-128 and sgpool-64, whose relevant fields are:

slab name     object size   objects per slab   pages per slab
sgpool-128    3072          2                  2
sgpool-64     1536          5                  2
The function below, calculate_slab_order(), increases the page order step by step and stops as soon as either of the following two conditions is satisfied:
1.
if (gfporder >= slab_break_gfp_order)
        break;
Stop once gfporder reaches slab_break_gfp_order; this threshold is 0 by default and is raised to 1 when the machine has more than 32 MB of RAM.
2.
/*
 * Acceptable internal fragmentation?
 */
if (left_over * 8 <= (PAGE_SIZE << gfporder))
        break;
Stop once the wasted (left-over) space is no more than 1/8 of the total slab size.
In both of our examples above, a single page would leave more than 1/8 of the space wasted, so the allocator moves on past order 0 and ends up using two pages per slab.
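Working through the numbers (my own arithmetic, assuming a 4 KB page and off-slab slab management, i.e. mgmt_size = 0 in cache_estimate(), which should apply here since both object sizes are well above PAGE_SIZE/8, and assuming slab_break_gfp_order is 1 on a machine with more than 32 MB of RAM):

sgpool-64, object size 1536:
    order 0: 4096 / 1536 = 2 objects, left_over = 4096 - 2*1536 = 1024; 1024*8 = 8192 > 4096, keep going
    order 1: 8192 / 1536 = 5 objects, left_over = 8192 - 5*1536 = 512;  512*8 = 4096 <= 8192, stop
    => 5 objects in 2 pages, matching /proc/slabinfo

sgpool-128, object size 3072:
    order 0: 4096 / 3072 = 1 object,  left_over = 4096 - 3072 = 1024;   1024*8 = 8192 > 4096, keep going
    order 1: 8192 / 3072 = 2 objects, left_over = 8192 - 2*3072 = 2048; the 1/8 check still fails,
             but gfporder has reached slab_break_gfp_order, so the loop stops anyway
    => 2 objects in 2 pages, matching /proc/slabinfo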
/**
 * calculate_slab_order - calculate size (page order) of slabs
 * @cachep: pointer to the cache that is being created
 * @size: size of objects to be created in this cache.
 * @align: required alignment for the objects.
 * @flags: slab allocation flags
 *
 * Also calculates the number of objects per slab.
 *
 * This could be made much more intelligent. For now, try to avoid using
 * high order pages for slabs. When the gfp() functions are more friendly
 * towards high-order requests, this should be changed.
 */
static size_t calculate_slab_order(struct kmem_cache *cachep,
			size_t size, size_t align, unsigned long flags)
{
	unsigned long offslab_limit;
	size_t left_over = 0;
	int gfporder;

	/* estimate by increasing the page order step by step */
	for (gfporder = 0; gfporder <= KMALLOC_MAX_ORDER; gfporder++) {
		unsigned int num;
		size_t remainder;

		cache_estimate(gfporder, size, align, flags, &remainder, &num);
		if (!num)
			continue;

		if (flags & CFLGS_OFF_SLAB) {
			/*
			 * Max number of objs-per-slab for caches which
			 * use off-slab slabs. Needed to avoid a possible
			 * looping condition in cache_grow().
			 */
			offslab_limit = size - sizeof(struct slab);
			offslab_limit /= sizeof(kmem_bufctl_t);

			if (num > offslab_limit)
				break;
		}

		/* Found something acceptable - save it away */
		cachep->num = num;
		cachep->gfporder = gfporder;
		left_over = remainder;

		/*
		 * A VFS-reclaimable slab tends to have most allocations
		 * as GFP_NOFS and we really don't want to have to be allocating
		 * higher-order pages when we are unable to shrink dcache.
		 */
		if (flags & SLAB_RECLAIM_ACCOUNT)
			break;

		/*
		 * Large number of objects is good, but very large slabs are
		 * currently bad for the gfp()s.
		 */
		if (gfporder >= slab_break_gfp_order)
			break;

		/*
		 * Acceptable internal fragmentation?
		 */
		if (left_over * 8 <= (PAGE_SIZE << gfporder))
			break;
	}
	return left_over;
}
/*
 * Calculate the number of objects and left-over bytes for a given buffer size.
 */
static void cache_estimate(unsigned long gfporder, size_t buffer_size,
			   size_t align, int flags, size_t *left_over,
			   unsigned int *num)
{
	int nr_objs;
	size_t mgmt_size;
	size_t slab_size = PAGE_SIZE << gfporder;

	/*
	 * The slab management structure can be either off the slab or
	 * on it. For the latter case, the memory allocated for a
	 * slab is used for:
	 *
	 * - The struct slab
	 * - One kmem_bufctl_t for each object
	 * - Padding to respect alignment of @align
	 * - @buffer_size bytes for each object
	 *
	 * If the slab management structure is off the slab, then the
	 * alignment will already be calculated into the size. Because
	 * the slabs are all pages aligned, the objects will be at the
	 * correct alignment when allocated.
	 */
	if (flags & CFLGS_OFF_SLAB) {
		mgmt_size = 0;
		nr_objs = slab_size / buffer_size;

		if (nr_objs > SLAB_LIMIT)
			nr_objs = SLAB_LIMIT;
	} else {
		/*
		 * Ignore padding for the initial guess. The padding
		 * is at most @align-1 bytes, and @buffer_size is at
		 * least @align. In the worst case, this result will
		 * be one greater than the number of objects that fit
		 * into the memory allocation when taking the padding
		 * into account.
		 */
		nr_objs = (slab_size - sizeof(struct slab)) /
			  (buffer_size + sizeof(kmem_bufctl_t));

		/*
		 * This calculated number will be either the right
		 * amount, or one greater than what we want.
		 */
		if (slab_mgmt_size(nr_objs, align) + nr_objs*buffer_size
		       > slab_size)
			nr_objs--;

		if (nr_objs > SLAB_LIMIT)
			nr_objs = SLAB_LIMIT;

		mgmt_size = slab_mgmt_size(nr_objs, align);
	}
	*num = nr_objs;
	*left_over = slab_size - nr_objs*buffer_size - mgmt_size;
}
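To see the whole selection end to end, below is a minimal user-space sketch of the same estimate-then-grow loop. It is my own demo, not kernel code: it hard-codes a 4 KB page, assumes off-slab management (mgmt_size = 0, no struct slab or kmem_bufctl_t array inside the slab), ignores SLAB_LIMIT and the off-slab object limit, and fixes slab_break_gfp_order at 1 as on a machine with more than 32 MB of RAM; demo_estimate() and demo_order() are hypothetical helper names.

#include <stdio.h>

#define PAGE_SIZE            4096UL
#define MAX_DEMO_ORDER       10        /* arbitrary upper bound for this sketch */
#define SLAB_BREAK_GFP_ORDER 1         /* assumed: machine with more than 32 MB of RAM */

/* Simplified cache_estimate() for the off-slab case: no management data
 * lives inside the slab, so the slab is simply packed with objects. */
static void demo_estimate(int gfporder, size_t size,
			  size_t *left_over, unsigned int *num)
{
	size_t slab_size = PAGE_SIZE << gfporder;

	*num = slab_size / size;
	*left_over = slab_size - *num * size;
}

/* Simplified calculate_slab_order(): grow the order until one of the
 * two break conditions discussed above is satisfied. */
static int demo_order(size_t size, unsigned int *objs, size_t *waste)
{
	int gfporder;

	for (gfporder = 0; gfporder <= MAX_DEMO_ORDER; gfporder++) {
		unsigned int num;
		size_t remainder;

		demo_estimate(gfporder, size, &remainder, &num);
		if (!num)
			continue;

		*objs = num;
		*waste = remainder;

		if (gfporder >= SLAB_BREAK_GFP_ORDER)
			break;
		if (remainder * 8 <= (PAGE_SIZE << gfporder))
			break;
	}
	return gfporder;
}

int main(void)
{
	size_t sizes[2] = { 3072, 1536 };	/* sgpool-128, sgpool-64 */
	int i;

	for (i = 0; i < 2; i++) {
		unsigned int objs = 0;
		size_t waste = 0;
		int order = demo_order(sizes[i], &objs, &waste);

		printf("size %4zu: order %d (%lu pages), %u objects, %zu bytes left over\n",
		       sizes[i], order, 1UL << order, objs, waste);
	}
	return 0;
}

For 3072 bytes this picks order 1 with 2 objects, and for 1536 bytes order 1 with 5 objects, which is exactly what the two /proc/slabinfo lines at the top show.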