Category: LINUX

2010-10-20 16:27:47

Data structure:
1. Each cache is represented by an instance of the kmem_cache structure:

/*
 * struct kmem_cache
 *
 * manages a cache.
 */


struct kmem_cache {
/* 1) per-cpu data, touched during every alloc/free */
    struct array_cache *array[NR_CPUS];//see comment 1
/* 2) Cache tunables. Protected by cache_chain_mutex */
    unsigned int batchcount;//see comment 2
    unsigned int limit;//see comment 3
    unsigned int shared;//tunable controlling the size of the per-node shared array cache (see kmem_list3->shared)

    unsigned int buffer_size;//see comment 4
    u32 reciprocal_buffer_size;
/* 3) touched by every alloc & free from the backend */

    unsigned int flags;        /* constant flags, see comment 6 */
    unsigned int num;        /* # of objs per slab,see comment 8 */

/* 4) cache_grow/shrink */
    /* order of pgs per slab (2^n) */
    unsigned int gfporder;

    /* force GFP flags, e.g. GFP_DMA */
    gfp_t gfpflags;

    size_t colour;            /* cache colouring range */
    unsigned int colour_off;    /* colour offset */
    struct kmem_cache *slabp_cache;
    unsigned int slab_size;
    unsigned int dflags;        /* dynamic flags */

    /* constructor func */
    void (*ctor)(void *obj);

/* 5) cache creation/removal */
    const char *name;
    struct list_head next;

/* 6) statistics */
#ifdef CONFIG_DEBUG_SLAB
    unsigned long num_active;
    unsigned long num_allocations;
    unsigned long high_mark;
    unsigned long grown;
    unsigned long reaped;
    unsigned long errors;
    unsigned long max_freeable;
    unsigned long node_allocs;
    unsigned long node_frees;
    unsigned long node_overflow;
    atomic_t allochit;
    atomic_t allocmiss;
    atomic_t freehit;
    atomic_t freemiss;

    /*
     * If debugging is enabled, then the allocator can add additional
     * fields and/or padding to every object. buffer_size contains the total
     * object size including these internal fields, the following two
     * variables contain the offset to the user object and its size.
     */

    int obj_offset;
    int obj_size;//see comment 7
#endif /* CONFIG_DEBUG_SLAB */

    /*
     * We put nodelists[] at the end of kmem_cache, because we want to size
     * this array to nr_node_ids slots instead of MAX_NUMNODES
     * (see kmem_cache_init())
     * We still use [MAX_NUMNODES] and not [1] or [0] because cache_cache
     * is statically defined, so we reserve the max number of nodes.
     */

    struct kmem_list3 *nodelists[MAX_NUMNODES];//see Comment 5
    /*
     * Do not add fields after nodelists[]
     */

};


Comment 1:
array has one entry for each possible CPU in the system; each entry points to an instance of struct array_cache. For every cache the kernel therefore keeps one per-CPU array_cache, which buffers objects for that CPU so that the hot allocation and free paths rarely have to touch the slab lists.

/*
 * struct array_cache
 *
 * Purpose:
 * - LIFO ordering, to hand out cache-warm objects from _alloc
 * - reduce the number of linked list operations
 * - reduce spinlock operations
 *
 * The limit is stored in the per-cpu structure to reduce the data cache
 * footprint.
 *
 */
struct array_cache {
    unsigned int avail;//indicate the number of elements currently available.
    unsigned int limit;//the same as limit in kmem_cache.
    unsigned int batchcount;//the same as batchcount in kmem_cache.

    /*
     * touched is set to 1 when an element is removed from the cache, whereas
     * cache shrinking causes touched to be set to 0. This enables the kernel
     * to establish whether a cache has been accessed since it was last shrunk
     * and is an indicator of the importance of the cache.
     */
    unsigned int touched;
    spinlock_t lock;
    void *entry[];    /*
                       * Inline array of pointers to the cached objects.
                       * Must have this definition in here for the proper
                       * alignment of array_cache. Also simplifies accessing
                       * the entries.
                       */
};
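
To make the LIFO behaviour of avail and entry[] concrete, here is a minimal sketch of the per-CPU fast paths, loosely following what ____cache_alloc() and __cache_free() do in mm/slab.c. The function names fastpath_alloc/fastpath_free are illustrative only; refilling, draining, NUMA handling and debugging are omitted.

/*
 * Illustrative sketch, not kernel code: allocation pops the most
 * recently freed (cache-warm) object from the per-CPU array; freeing
 * pushes the object back as long as the array is below its limit.
 */
static void *fastpath_alloc(struct kmem_cache *cachep)
{
    struct array_cache *ac = cachep->array[smp_processor_id()];

    if (ac->avail) {
        ac->touched = 1;                /* remember the cache was used */
        return ac->entry[--ac->avail];  /* LIFO: hand out the warmest object */
    }
    /* empty: the real code refills batchcount objects from the slab lists */
    return NULL;
}

static void fastpath_free(struct kmem_cache *cachep, void *objp)
{
    struct array_cache *ac = cachep->array[smp_processor_id()];

    if (ac->avail < ac->limit) {
        ac->entry[ac->avail++] = objp;  /* keep it for the next allocation */
        return;
    }
    /* full: the real code flushes batchcount objects back to the slabs */
}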

Comment 2:
batchcount specifies the number of objects to be taken from the slabs of a cache and added to the per-CPU list when that list is empty. It also indicates the number of objects to be allocated when a cache is grown.

Comment 3:
limit specifies the maximum number of objects that may be held in a per-CPU list. If this value is exceeded, the kernel returns the number of objects defined in batchcount to the slabs.
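
How limit and batchcount work together on the free path can be sketched as follows; this is a condensed, illustrative version of the logic in cache_flusharray() in mm/slab.c, with locking, error handling and the per-node shared array left out.

/*
 * Illustrative sketch: once the per-CPU array is full, return a whole
 * batch of objects to the slab lists in one go, so that the node list
 * lock is taken once per batchcount objects instead of once per object.
 */
static void flush_when_full(struct kmem_cache *cachep, struct array_cache *ac)
{
    int batch = ac->batchcount;

    if (ac->avail < ac->limit)
        return;                 /* still room in the per-CPU array */

    /* free_block() hands the objects back to their slabs and moves the
     * slabs between the full/partial/free lists as needed */
    free_block(cachep, ac->entry, batch, numa_node_id());

    /* the oldest (coldest) objects at the bottom of the LIFO array were
     * flushed; slide the remaining pointers down */
    ac->avail -= batch;
    memmove(ac->entry, &ac->entry[batch], sizeof(void *) * ac->avail);
}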

Comment 4:
buffer_size specifies the size of the objects managed in the cache.

Comment 5:
nodelists is an array that contains an entry for each possible node in the system. Each entry holds an instance of struct kmem_list3 that groups the three slab lists (full, free, partially free) together.

/*
 * The slab lists for all objects.
 */
struct kmem_list3 {
    struct list_head slabs_partial;    /* partial list first, better asm code */
    struct list_head slabs_full;
    struct list_head slabs_free;
    unsigned long free_objects;
    unsigned int free_limit;
    unsigned int colour_next;    /* Per-node cache coloring */
    spinlock_t list_lock;
    struct array_cache *shared;    /* shared per node */
    struct array_cache **alien;    /* on other nodes */
    unsigned long next_reap;    /* updated without locking */
    int free_touched;        /* updated without locking */
};
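
The three lists are kept consistent by moving a slab whenever its count of used objects changes. The sketch below shows the free side in isolation, roughly what free_block() does per object, ignoring locking, the free_limit check and the destruction of surplus free slabs; struct slab (not shown in this post) is the per-slab management structure carrying the inuse counter.

/* Illustrative sketch: one object has just been returned to slabp. */
static void return_object_to_lists(struct kmem_cache *cachep,
                                   struct slab *slabp, int node)
{
    struct kmem_list3 *l3 = cachep->nodelists[node];

    list_del(&slabp->list);         /* detach from its current list */
    slabp->inuse--;                 /* one fewer object in use on this slab */
    l3->free_objects++;

    if (slabp->inuse == 0)
        list_add(&slabp->list, &l3->slabs_free);      /* completely free */
    else
        list_add(&slabp->list, &l3->slabs_partial);   /* partially used */
}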

Comment 6:
flags is a flag register defining the global properties of the cache. Currently only one flag bit is used: CFLGS_OFF_SLAB, which is set when the slab management structure is stored outside the slab.
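
Whether CFLGS_OFF_SLAB gets set is decided at cache creation time. Roughly, and simplifying the check done in kmem_cache_create(), objects of one eighth of a page or larger have their management data (struct slab plus the object index array) placed in a separate cache referenced by slabp_cache:

/* Illustrative sketch of the on-slab vs. off-slab decision. */
static int management_goes_off_slab(size_t obj_size)
{
    /* for large objects, keeping struct slab on the same pages would
     * waste too much space, so it is stored off-slab in slabp_cache */
    return obj_size >= (PAGE_SIZE >> 3);
}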

Comment 7:
obj_size is the size of the objects in the cache, including all fill bytes added for alignment purposes; with slab debugging enabled, buffer_size additionally accounts for the debug fields placed around each object (see the structure comment above).
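
With slab debugging enabled, buffer_size covers the whole padded buffer, while the user-visible object starts obj_offset bytes into it. A hypothetical helper (illustrative only) showing the relationship:

/* Illustrative only: map a padded debug buffer to the user object. */
static void *buffer_to_user_object(struct kmem_cache *cachep, void *buf)
{
    /* obj_size bytes starting here belong to the caller; the bytes
     * around them hold red zones and caller information */
    return (char *)buf + cachep->obj_offset;
}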

Comment 8:
num holds the maximum number of objects that fit into a slab.
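
num follows from gfporder and buffer_size. The sketch below mirrors the arithmetic of cache_estimate() in mm/slab.c, but ignores the alignment refinements the real function performs, so it is an approximation only.

/* Illustrative estimate of how many objects fit into one slab. */
static unsigned int estimate_num(unsigned int gfporder, size_t buffer_size,
                                 int off_slab)
{
    size_t slab_bytes = PAGE_SIZE << gfporder;  /* 2^gfporder pages per slab */

    if (off_slab)
        /* management data lives in slabp_cache; all bytes go to objects */
        return slab_bytes / buffer_size;

    /* on-slab: struct slab plus one kmem_bufctl_t index per object share
     * the pages with the objects themselves */
    return (slab_bytes - sizeof(struct slab)) /
           (buffer_size + sizeof(kmem_bufctl_t));
}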





