About the author

Started out doing Linux traffic control; then secure operating systems; then OS hardening; now TCP acceleration. Never left the Unix-like world — though my skill is limited.


Category: Linux memory management

2013-01-07 17:56:43

wjcdx@qq.com

@ For learning and exchange only; not for commercial use

Linux Kernel Code: 2.6.35.7

 

ULK3: A.1. Prehistoric Age: the BIOS

The BIOS uses Real Mode addresses because they are the only ones available when the computer is turned on. A Real Mode address is composed of a seg segment and an off offset; the corresponding physical address is given by seg*16+off. As a result, no Global Descriptor Table, Local Descriptor Table, or paging table is needed by the CPU addressing circuit to translate a logical address into a physical one. Clearly, the code that initializes the GDT, LDT, and paging tables must run in Real Mode.

 

Intel Manual 3a: 3.1 MEMORY MANAGEMENT OVERVIEW

When operating in protected mode, some form of segmentation must be used. There is no mode bit to disable segmentation. The use of paging, however, is optional.

 

In protected mode, the IA-32 architecture provides a normal physical address space of 4 GBytes (2^32 bytes). This is the address space that the processor can address on its address bus.

The CPU's addressable space is determined by the width of its address bus.

 

At the system-architecture level in protected mode, the processor uses two stages of address translation to arrive at a physical address: logical-address translation and linear address space paging.

Even with the minimum use of segments, every byte in the processor’s address space is accessed with a logical address.

All addresses start out as logical addresses: the operand addresses used by instructions are logical addresses.

 

 

Software enables paging by using the MOV to CR0 instruction to set CR0.PG. Before doing so, software should ensure that control register CR3 contains the physical address of the first paging structure that the processor will use for linear-address translation (see Section 4.2) and that structure is initialized as desired.

 

How does the CPU decide whether a paging-structure entry maps a physical page or references the next level of the paging hierarchy?

64-ia-32-architectures-software-developer-vol-3a-3b-system-programming-manual

4.2 HIERARCHICAL PAGING STRUCTURES: AN OVERVIEW

In the examples above, a paging-structure entry maps a page with 4-KByte page frame when only 12 bits remain in the linear address; entries identified earlier always reference other paging structures. That may not apply in other cases. The following items identify when an entry maps a page and when it references another paging structure:

  • If more than 12 bits remain in the linear address, bit 7 (PS -- page size) of the current paging-structure entry is consulted. If the bit is 0, the entry references another paging structure; if the bit is 1, the entry maps a page.
  • If only 12 bits remain in the linear address, the current paging-structure entry always maps a page (bit 7 is used for other purposes)

If a paging-structure entry maps a page when more than 12 bits remain in the linear address, the entry identifies a page frame larger than 4 KBytes. For example, 32-bit paging uses the upper 10 bits of a linear address to locate the first paging-structure entry; 22 bits remain. If that entry maps a page, the page frame is 2^22 bytes = 4 MBytes. 32-bit paging supports 4-MByte pages if CR4.PSE = 1. PAE paging and IA-32e paging support 2-MByte pages (regardless of the value of CR4.PSE). IA-32e paging may support 1-GByte pages (see Section 4.1.4).

 

cache & tlb

general hardware cache:

  • caches memory contents, indexed by physical address;
  • contents must be kept coherent between the cache and main memory, and between the caches of different CPUs;
  • because it is indexed by physical address, it does not need to be invalidated on a process switch.

tlb cache:

  • caches linear address -> physical address translations;
  • each CPU has its own TLB; no synchronization between them is needed;
  • because the same linear address may map to different physical addresses in different processes, the TLB is automatically invalidated on a process switch (i.e. a page table switch, an update of cr3).

 

Page table initialization:
  1. Where is each level of the page table stored?
  2. What is stored in each page table entry?

 

 

As the figure above shows, the PGD's address is stored in cr3, the PGD holds the base addresses of PUDs, and so on down the levels; so there is some flexibility in where each level of PxD is placed.

 

PAGE_OFFSET

arch/x86/include/asm/page_32_types.h:

#define __PAGE_OFFSET           _AC(CONFIG_PAGE_OFFSET, UL)

 

The PAGE_OFFSET macro yields the value 0xc0000000; this is the offset in the linear address space of a process where the kernel lives.(ULK3)

 

Addendum: temporary page table setup in head_32.S (the non-PAE branch):

#else	/* Not PAE */
page_pde_offset = (__PAGE_OFFSET >> 20);	/* 0xc00/4 = 768 */

	movl $pa(__brk_base), %edi	/* per vmlinux.lds.S, __brk_base reserves 64*1024 bytes for the temporary page tables */
	movl $pa(swapper_pg_dir), %edx	/* page global directory pointer */
	movl $PTE_IDENT_ATTR, %eax	/* page table entry attributes: PTE_IDENT_ATTR = 0x3 */
10:
	leal PDE_IDENT_ATTR(%edi),%ecx	/* Create PDE entry: the PGD stores page table addresses; this takes the temporary page table's address, i.e. __brk_base */
	movl %ecx,(%edx)		/* Store identity PDE entry: PGD entry 0 points to __brk_base */
	movl %ecx,page_pde_offset(%edx)	/* Store kernel PDE entry: 0+768 = 768, PGD entry 768 also points to __brk_base */
	addl $4,%edx			/* advance %edx to the next PGD entry; each entry is 4 bytes — hence the /4 in 0xc00/4 */
	movl $1024, %ecx		/* counter: 1024 page table entries to fill */
11:
	stosl
	addl $0x1000,%eax		/* %eax = 0x1003, 0x2003, ..., 0x400003, 0x401003 */
	loop 11b
	/*
	 * End condition: we must map up to the end + MAPPING_BEYOND_END.
	 */
	movl $pa(_end) + MAPPING_BEYOND_END + PTE_IDENT_ATTR, %ebp
	cmpl %ebp,%eax			/* if the kernel image plus the extra reserved area is not fully mapped yet, */
	jb 10b				/* fill the next PGD entry and keep going — so not just 8M; 12M, 16M, ... as needed */
	addl $__PAGE_OFFSET, %edi
	movl %edi, pa(_brk_end)
	shrl $12, %eax
	movl %eax, pa(max_pfn_mapped)	/* record the highest page frame number mapped */

	/* Do early initialization of the fixmap area */
	movl $pa(swapper_pg_fixmap)+PDE_IDENT_ATTR,%eax
	movl %eax,pa(swapper_pg_dir+0xffc)
#endif

 

Second stage of page table initialization: setup_arch(&command_line); /* performs the second stage of page table init */

build_all_zonelists(NULL);
page_alloc_init();

vfs_caches_init_early();
mm_init();

 

Heh — for now we study only the page table initialization part.

 

x86_init

defined in arch/x86/kernel/x86_init.c:

struct x86_init_ops x86_init __initdata = {

	.resources = {
		.probe_roms		= x86_init_noop,
		.reserve_resources	= reserve_standard_io_resources,
		.memory_setup		= default_machine_specific_memory_setup,
	},

 

E820: setup_arch()->setup_memory_map()->x86_init.resources.memory_setup(); e820_print_map(who);

BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 000000000009f800 (usable)
 BIOS-e820: 000000000009f800 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000ca000 - 00000000000cc000 (reserved)
 BIOS-e820: 00000000000dc000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 000000001fef0000 (usable)
 BIOS-e820: 000000001fef0000 - 000000001feff000 (ACPI data)
 BIOS-e820: 000000001feff000 - 000000001ff00000 (ACPI NVS)
 BIOS-e820: 000000001ff00000 - 0000000020000000 (usable)
 BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
 BIOS-e820: 00000000fffe0000 - 0000000100000000 (reserved)

 

http://blog.chinaunix.net/space.php?uid=1701789&do=blog&id=263951

 

e820 determines the status of each memory range; low memory (896M) and high memory are mapped; then the zone-related operations. Somewhat tedious, but easy to follow.

Then buddy system and slab initialization;

 

Low memory page table initialization:

/*
 * Setup the direct mapping of the physical memory at PAGE_OFFSET.
 * This runs before bootmem is initialized and gets pages directly from
 * the physical memory. To access them they are temporarily mapped.
 */
unsigned long __init_refok init_memory_mapping(unsigned long start,
                                               unsigned long end)

 

setup_arch()->init_memory_mapping()->kernel_physical_mapping_init()

cr3 still points to swapper_pg_dir; pmd/pte pages that do not yet exist are allocated anew.

 

To understand which memory the page tables are allocated for, we need to understand:

  • how e820 treats memory;
  • what init_memory_mapping does;

 

early_ioremap_init(): initializes the fixmap slots used by early ioremap; occupies one pmd.

setup_memory_map(): e820 memory setup — RAM size and status.

pfn: page frame number

  • max_pfn: page frame number corresponding to the highest physical memory address;
  • max_low_pfn: highest page frame number of low memory;
  • max_pfn_mapped:
  • max_low_pfn_mapped:

 

TODO: how memory pages are allocated in the early boot stage

 

in setup_arch:

/* How many end-of-memory variables you have, grandma! */

Just how many variables marking the end of memory are there, indeed!

 

find_low_pfn_range

/* max_pfn_mapped is updated here */
max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
max_pfn_mapped = max_low_pfn_mapped;

max_low_pfn is the highest page frame number of low memory.

#ifdef CONFIG_X86_32
	/* max_low_pfn get updated here */
	find_low_pfn_range();
#else

||
\/

void __init find_low_pfn_range(void)
{
	/* it could update max_pfn */
	if (max_pfn <= MAXMEM_PFN)
		lowmem_pfn_init();
	else
		highmem_pfn_init();
}

||
\/

/*
 * All of RAM fits into lowmem - but if user wants highmem
 * artificially via the highmem=x boot parameter then create
 * it:
 */

void __init lowmem_pfn_init(void)

==

/*
 * We have more RAM than fits into lowmem - we try to put it into
 * highmem, also taking the highmem=x boot parameter into account:
 */

void __init highmem_pfn_init(void)

  • lowmem_pfn_init: sizeof(RAM) < 896MB — all RAM fits within low memory;
  • highmem_pfn_init: sizeof(RAM) exceeds the start of the dynamic mapping area (896MB);

The comments also mention the highmem=x boot parameter case; we ignore it here.

 

init_memory_mapping

max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);

||
\/

/*
 * Setup the direct mapping of the physical memory at PAGE_OFFSET.
 * This runs before bootmem is initialized and gets pages directly from
 * the physical memory. To access them they are temporarily mapped.
 */
unsigned long __init_refok init_memory_mapping(unsigned long start,
                                               unsigned long end)

From this we can see:

  1. init_memory_mapping builds the page tables of the direct mapping area;
  2. the page frames mapped are 0~max_low_pfn, i.e. the low memory range;
  3. when init_memory_mapping is called, bootmem (most likely the boot-time page allocator; not yet certain) has not been initialized, so memory pages must be taken directly from physical memory.

 

init_memory_mapping uses struct map_range:

struct map_range mr[NR_RANGE_MR];

struct map_range {
        unsigned long start;
        unsigned long end;
        unsigned page_size_mask;
};

#ifdef CONFIG_X86_32
#define NR_RANGE_MR 3
#else /* CONFIG_X86_64 */
#define NR_RANGE_MR 5
#endif

 

map_range differs from e820.map in that:

in e820.map, start_addr and size are both 64-bit, so it can describe memory ranges beyond 4G;

 

Next to study:

  1. the second-stage page table setup, covering the direct mapping, dynamic mapping, permanent mapping, and fixed mapping areas;
  2. where the page tables are stored;
  3. how memory pages are allocated and used once bootmem is initialized.

/* head if not big page alignment ? */
        start_pfn = start >> PAGE_SHIFT;
        pos = start_pfn << PAGE_SHIFT;
#ifdef CONFIG_X86_32
        /*
         * Don't use a large page for the first 2/4MB of memory
         * because there are often fixed size MTRRs in there
         * and overlapping MTRRs into large pages can cause
         * slowdowns.
         */

        if (pos == 0)
                end_pfn = 1<<(PMD_SHIFT - PAGE_SHIFT);
        else
                end_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
                                 << (PMD_SHIFT - PAGE_SHIFT);

  • (PMD_SHIFT - PAGE_SHIFT) is the number of page frames covered by one PMD entry;
  • both the "1" and the "((pos + (PMD_SIZE - 1))>>PMD_SHIFT)" are PMD numbers (N), so end_pfn is the number of the last page frame of the N-th PMD;

This assumes page frame numbering starts at 1; counting from 1, end_pfn is the number of the last frame; counting from 0 it would be the number of the first frame of the next PMD.

More precisely, "1" and "((pos + (PMD_SIZE - 1))>>PMD_SHIFT)" give the number (counting from 1) of the PMD that contains pos.

 

struct map_range

At the /* big page (2M) range */ step — i.e. the second time mr is computed — pos (pos = end_pfn << PAGE_SHIFT;) already points at the start of the next PMD.

These three map_range computations split memory at PMD-aligned boundaries: the head before the first PMD-aligned address and the tail after the last one are managed with small pages, while the whole PMDs in between are managed with big pages.

 

page_size_mask: 0 selects 4K pages; (1<<PG_LEVEL_2M) selects big (2M/4M) pages (the expression was truncated in the original; PG_LEVEL_2M is the constant the kernel source uses).

save_mr is simple: it stores the map_range into mr. Note how it is called for the three ranges:

  • head: nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
  • middle: nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, page_size_mask & (1<<PG_LEVEL_2M));
  • tail: nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);

Next it tries to "merge same page size and continuous" ranges, then prints mr;

"Find space for the kernel direct mapping tables." — now we reach the early memory allocation mechanism.

The exact values of variables like pmd, pud, pte, and pgd don't matter much yet, so we defer them; eventually, though, the related macro definitions must be pinned down.

#define roundup(x, y)   ((((x) + ((y) - 1)) / (y)) * (y))

e820_table_start:
e820_table_end:
e820_table_top:

 

Next to study:
  • page table initialization of the direct mapping, dynamic mapping, permanent mapping, and fixed mapping areas;
  • initmem_init and the bootmem allocator;
  • zone memory init;
  • NUMA;
  • buddy system;
  • slab allocator;
  • fs/swap;

 

 

Confusion over max_pfn_mapped

Wrong claim: under CONFIG_X86_32, max_pfn_mapped is only set after max_low_pfn_mapped has been set.

 

init_memory_mapping initializes the direct mapping area, i.e. the low memory page tables. It is called as:

max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
max_pfn_mapped = max_low_pfn_mapped;

Its prototype:

unsigned long __init_refok init_memory_mapping(unsigned long start,
                                               unsigned long end)

max_low_pfn<<PAGE_SHIFT is passed in as end. Inspecting init_memory_mapping, it calls

find_early_table_space(end, use_pse, use_gbpages);

to search e820.map for a memory range big enough to hold all the page tables. The end passed in here is again max_low_pfn<<PAGE_SHIFT, and it is used as a bound for the search;

but as the earlier note said, inspecting setup_arch shows that max_low_pfn is only set after init_memory_mapping returns — so its value would be 0 at this point. How could 0 serve as a search bound?

Searching around, I found what looked like max_low_pfn being set while the temporary page tables are built in head_32.S:

shrl $12, %eax
movl %eax, pa(max_pfn_mapped)

OK, I was wrong: the value passed in is max_low_pfn, while what head_32.S updates is max_pfn_mapped. Still, one conclusion holds:

max_pfn_mapped is updated passively; its value is the highest page frame number currently mapped.

PS: max_low_pfn is updated earlier, in find_low_pfn_range();

 

Allocating space for the page tables

find_early_table_space first computes how many memory pages each level of page directory needs, then calls

e820_table_start = find_e820_area(start, max_pfn_mapped<<PAGE_SHIFT,
                                  tables, PAGE_SIZE);

to search e820.map for a range able to hold all the page directories; the page tables must not lie above the highest address already mapped: e820.map.start + sizeof(all page directories) < max_pfn_mapped<<PAGE_SHIFT.

As the previous note said, max_pfn_mapped is set when the temporary page tables are built, marking the highest page frame mapped so far;

so how do we guarantee the mapped region can hold all the page directories?

Recall that when building the temporary page tables, an extra RESERVED region was mapped:

	/*
	 * End condition: we must map up to the end + MAPPING_BEYOND_END.
	 */
	movl $pa(_end) + MAPPING_BEYOND_END + PTE_IDENT_ATTR, %ebp
	cmpl %ebp,%eax
	jb 10b

MAPPING_BEYOND_END is, as the name suggests, related to mapping; its definition:

/* Enough space to fit pagetables for the low memory linear map */
MAPPING_BEYOND_END = \
	PAGE_TABLE_SIZE(((1<<32) - __PAGE_OFFSET) >> PAGE_SHIFT) << PAGE_SHIFT

As the comment says, this is enough to hold all the (low memory) page tables;

2^32 is 4G and __PAGE_OFFSET is 3G, so the definition reads: the size of the page tables covering the 1G kernel space (0xc0000000~0xffffffff).

PAGE_TABLE_SIZE的定义:

#if PTRS_PER_PMD > 1
#define PAGE_TABLE_SIZE(pages) (((pages) / PTRS_PER_PMD) + PTRS_PER_PGD)
#else
#define PAGE_TABLE_SIZE(pages) ((pages) / PTRS_PER_PGD)
#endif

Look at the second case: PTRS_PER_PMD is not greater than 1, i.e. it is 1, meaning there are only two levels, PGD and PTE: 10 bits of PGD index, 10 bits of PTE index, and a 12-bit offset;

numberof(pages)/PTRS_PER_PGD accounts only for the memory taken by the page tables (PTEs) themselves — isn't the PGD left out? Isn't that risky?

No: the PGD lives in the statically allocated array pointed to by swapper_pg_dir.

I don't know why PTRS_PER_PTE isn't used here instead; the three-level case with PTRS_PER_PMD > 1 (PGD/PMD/PTE) I also don't fully understand yet;

 

The claim that no space is allocated for the PGD is corroborated by:

1024 should be enough; the pgd is still swapper_pg_dir, and there are no pmds.

 

find_early_table_space finds enough room for the page tables and initializes the variables that describe that region:

  • e820_table_start: start of the page table space;
  • e820_table_end: end of the portion of the page table space used so far;
  • e820_table_top: end of the page table space;

 

Linux page directory structure

Next comes kernel_physical_mapping_init, which performs the actual mapping; before reading it, look at how Linux defines the page table levels.

For x86, Linux defines two layouts:

  • three levels: PGD/PMD/PTE;
  • two levels: PGD/PTE.

The PxD_SHIFT and PTRS_PER_PxD values actually used are defined in pgtable-2level_types.h and pgtable-3level_types.h:

/*
 * traditional i386 two-level paging structure:
 */


#define PGDIR_SHIFT     22
#define PTRS_PER_PGD    1024


/*
 * the i386 is two-level, so we don't really have any
 * PMD directory physically.
 */


#define PTRS_PER_PTE    1024

The unused ones are defined in include/asm-generic/pgtable-nop{u,m}d.h:

#define PMD_SHIFT       PUD_SHIFT
#define PTRS_PER_PMD    1
#define PMD_SIZE        (1UL << PMD_SHIFT)
#define PMD_MASK        (~(PMD_SIZE-1))

Here we study the most common case:

  • two-level page directories;
  • no PSE;
  • no PAE.

 

alloc & init PGD/PMD/PTE

kernel_physical_mapping_init loops over and fills in each level of the page tables.

It calls one_md_table_init to allocate the page holding a PMD directory, and one_page_table_init to allocate the page holding a PTE table. These pages come from the space we just found, handed out one page at a time: pte = one_page_table_init(pmd);

/*
 * Create a page table and place a pointer to it in a middle page
 * directory entry:
 */
static pte_t * __init one_page_table_init(pmd_t *pmd)
{
	if (!(pmd_val(*pmd) & _PAGE_PRESENT)) {
		pte_t *page_table = NULL;

		if (after_bootmem) {	/* not yet initialized at this point */
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KMEMCHECK)
			page_table = (pte_t *) alloc_bootmem_pages(PAGE_SIZE);
#endif
			if (!page_table)
				page_table =
				(pte_t *)alloc_bootmem_pages(PAGE_SIZE);
		} else
			page_table = (pte_t *)alloc_low_page();	/* allocate a memory page */

		paravirt_alloc_pte(&init_mm, __pa(page_table) >> PAGE_SHIFT);
		set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE));	/* store the new PTE page's address in the PMD entry */
		BUG_ON(page_table != pte_offset_kernel(pmd, 0));
	}

	return pte_offset_kernel(pmd, 0);
}

||
\/

static __init void *alloc_low_page(void)
{
	unsigned long pfn = e820_table_end++;
	void *adr;

	if (pfn >= e820_table_top)
		panic("alloc_low_page: ran out of memory");

	adr = __va(pfn * PAGE_SIZE);
	memset(adr, 0, PAGE_SIZE);
	return adr;
}

The way alloc_low_page consumes e820_table_end shows us what e820_table_end and friends mean.

 

Page directory entry attributes

__pmd(__pa(page_table) | _PAGE_TABLE)

The pmd entry's attributes are set as well: _PAGE_TABLE.

Discard the temporary page tables?

How are the temporarily mapped page tables handled here?

The original tables stay in use;

one_page_table_init first checks whether the page table already exists: if (!(pmd_val(*pmd) & _PAGE_PRESENT)) {;

if a page table set up by the temporary mapping already exists, its address is simply returned: return pte_offset_kernel(pmd, 0);

 

High memory page table initialization

As shown in the figure above, the 3G~4G kernel space is divided into the direct mapping area, the dynamic mapping area (vmalloc), the permanent mapping area (kmap), and the fixed mapping area. We now have a reasonable grasp of the direct mapping area, i.e. low memory.

Next, how high memory is mapped.

 

Fixed mapping area

First, tracing init_memory_mapping: after low memory is initialized, it calls early_ioremap_page_table_range_init() to initialize the fixed mapping area.

void __init early_ioremap_page_table_range_init(void)
{
        pgd_t *pgd_base = swapper_pg_dir;
        unsigned long vaddr, end;

        /*
         * Fixed mappings, only the page table structure has to be
         * created - mappings will be set by set_fixmap():
         */
        vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
        end = (FIXADDR_TOP + PMD_SIZE - 1) & PMD_MASK;
        page_table_range_init(vaddr, end, pgd_base);
        early_ioremap_reset();
}

||
\/

enum fixed_addresses {
#ifdef CONFIG_X86_32
        FIX_HOLE,
        FIX_VDSO,
#else
        VSYSCALL_LAST_PAGE,
        VSYSCALL_FIRST_PAGE = VSYSCALL_LAST_PAGE
                            + ((VSYSCALL_END-VSYSCALL_START) >> PAGE_SHIFT) - 1,
****

#ifdef CONFIG_INTEL_TXT
        FIX_TBOOT_BASE,
#endif
        __end_of_fixed_addresses
}

==

extern unsigned long __FIXADDR_TOP;
#define FIXADDR_TOP     ((unsigned long)__FIXADDR_TOP)

unsigned long __FIXADDR_TOP = 0xfffff000; /* top 4K guard area */
EXPORT_SYMBOL(__FIXADDR_TOP);

||
\/

#define __fix_to_virt(x)        (FIXADDR_TOP - ((x) << PAGE_SHIFT)) /* FIXADDR_TOP minus the fixed-address index, scaled to pages */
#define __virt_to_fix(x)        ((FIXADDR_TOP - ((x)&PAGE_MASK)) >> PAGE_SHIFT)

From the above:

page_table_range_init(vaddr, end, pgd_base);

vaddr is the lowest address of the fixed-address area and end the highest; page_table_range_init builds this area's page tables: the pgd entries are linked to page tables, but the page table entries are not yet linked to actual memory pages.

 

end = pmd_number_of(FIXADDR_TOP) + 1

(FIXADDR_TOP + PMD_SIZE - 1) & PMD_MASK rounds up: it takes the ceiling of the division by PMD_SIZE,

where PMD_SIZE = 1 << PMD_SHIFT.

 

Back in setup_arch: it next calls initmem_init(0, max_pfn, acpi, k8); to initialize the bootmem allocator. The logic is somewhat involved; deferred for now.

After that comes the high memory area initialization:

	x86_init.paging.pagetable_setup_start(swapper_pg_dir);
	paging_init();
	x86_init.paging.pagetable_setup_done(swapper_pg_dir);

||
\/

	.paging = {
		.pagetable_setup_start	= native_pagetable_setup_start,
		.pagetable_setup_done	= native_pagetable_setup_done,
	},

native_pagetable_setup_done is empty; native_pagetable_setup_start clears the high memory page table (pte) mappings.

 

Next comes the misleadingly named paging_init():

void __init paging_init(void)
{
	pagetable_init();	/* initialize the permanent mapping area */

	__flush_tlb_all();

	kmap_init();		/* further initialization of the fixed mapping area */

	/*
	 * NOTE: at this point the bootmem allocator is fully available.
	 */
	sparse_init();
	zone_sizes_init();
}

 

Permanent mapping area

static void __init pagetable_init(void)
{
	pgd_t *pgd_base = swapper_pg_dir;

	permanent_kmaps_init(pgd_base);
}

||
\/

static void __init permanent_kmaps_init(pgd_t *pgd_base)
{
	unsigned long vaddr;
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	vaddr = PKMAP_BASE;
	page_table_range_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);

	pgd = swapper_pg_dir + pgd_index(vaddr);
	pud = pud_offset(pgd, vaddr);
	pmd = pmd_offset(pud, vaddr);
	pte = pte_offset_kernel(pmd, vaddr);
	pkmap_page_table = pte;
}

#define VMALLOC_START   ((unsigned long)high_memory + VMALLOC_OFFSET)
#ifdef CONFIG_X86_PAE
#define LAST_PKMAP 512
#else
#define LAST_PKMAP 1024
#endif

This builds the page tables for the 4M permanent mapping area;

 

static void __init kmap_init(void)
{
	unsigned long kmap_vstart;

	/*
	 * Cache the first kmap pte:
	 */
	kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
	kmap_pte = kmap_get_fixmap_pte(kmap_vstart);

	kmap_prot = PAGE_KERNEL;
}

||
\/

#ifdef CONFIG_X86_32
	FIX_KMAP_BEGIN,	/* reserved pte's for temporary kernel mappings */
	FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,

The fixed mapping slots between FIX_KMAP_BEGIN and FIX_KMAP_END are per-CPU; their significance will only become clear when we study how the fixed mapping area is used.

 

Page table summary

Our study of the kernel space and its page table initialization is essentially complete.

  • The direct mapping area's linear-to-physical mappings are all in place; the page tables are stored after __brk_base, where the reserved space is large enough for all the page tables of the 1G kernel space;
  • for the fixed and permanent mapping areas of high memory, the pgd-to-pte linkage is built; the pte-to-physical-address mappings presumably wait until first use.
  • We never saw the dynamic mapping area being initialized; likely the whole pgd->pte->phys_addr chain is built dynamically at use time. (Partly corroborated by an analysis of vmalloc usage: http://blog.chinaunix.net/space.php?uid=20543183&do=blog&id=1930785)

 

bootmem allocator:

 

Reserved memory areas

Where exactly is e820_table_start?

1. Is it after __brk_base, right behind the temporary page tables?

2. How do we ensure the memory holding the kernel image is not overwritten?

The first could be arranged in two ways:

  1. pass _end as the start argument of find_early_table_space, so the returned address begins at _end;
  2. before find_early_table_space, reserve the kernel's segments below _end in e820.map.

For the second requirement, only the e820.map approach works; and afterwards the other memory allocators must be initialized according to the actual memory usage.

 

e820 is initialized in setup_arch; tracing the code, I found no place that reserves the kernel image, and online material on this is scarce.

Fine — build the kernel and look at the e820.map it prints:

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index cdb4ae9..b278535 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -75,7 +75,6 @@ static void __init find_early_table_space(unsigned long end, int use_pse,
 #else
	start = 0x8000;
 #endif
+	e820_print_map("wjcdx");
	e820_table_start = find_e820_area(start, max_pfn_mapped<<PAGE_SHIFT,
					tables, PAGE_SIZE);
	if (e820_table_start == -1UL)

The printed output:

BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 000000000009f800 (usable)
 BIOS-e820: 000000000009f800 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000ca000 - 00000000000cc000 (reserved)
 BIOS-e820: 00000000000dc000 - 00000000000e4000 (reserved)
 BIOS-e820: 00000000000e8000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 000000003fef0000 (usable)
 BIOS-e820: 000000003fef0000 - 000000003feff000 (ACPI data)
 BIOS-e820: 000000003feff000 - 000000003ff00000 (ACPI NVS)
 BIOS-e820: 000000003ff00000 - 0000000040000000 (usable)
 BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
 BIOS-e820: 00000000fffe0000 - 0000000100000000 (reserved)

******

kernel direct mapping tables up to 377fe000 @ 15000-1a000

We can see:

e820_table_start = 0x15000000
e820_table_top   = 0x1a000000

  1. in e820.map, this range is usable;
  2. e820_table_start is far above 0x100000 (1M), so the direct mapping area's page directories are not stored right behind the temporarily mapped page tables;

Next question: find_e820_area is called with start = 0x{7,8}000 — why is the e820_table_start it returns so much larger?

Stepping in: find_e820_area() -> find_early_area() -> bad_addr(); only bad_addr() changes start:

/* Check for already reserved areas */
static inline int __init bad_addr(u64 *addrp, u64 size, u64 align)
{
	int i;
	u64 addr = *addrp;
	int changed = 0;
	struct early_res *r;
again:
	i = find_overlapped_early(addr, addr + size);
	r = &early_res[i];
	if (i < max_early_res && r->end) {
		*addrp = addr = round_up(r->end, align);
		changed = 1;
		goto again;
	}
	return changed;
}

This function checks the reserved ranges recorded in the global early_res array; on overlap, start is pushed past them.

Could the kernel image reservation be recorded in this early_res?

Same approach as before — print it:

diff --git a/kernel/early_res.c b/kernel/early_res.c
index 7bfae88..c6a4475 100644
--- a/kernel/early_res.c
+++ b/kernel/early_res.c
@@ -44,6 +44,17 @@ static int __init find_overlapped_early(u64 start, u64 end)
	return i;
 }
 
+static void __init early_res_print()
+{
+       int i;
+       struct early_res *r;
+
+       for (i = 0; i < max_early_res && early_res[i].end; i++) {
+               r = &early_res[i];
+               printk(KERN_DEBUG "early_res: %s: %llx-%llx\n", r->name, r->start, r->end);
+       }
+}
+
 
/*
  * Drop the i-th range from the early reservation map,
  * by copying any higher ranges down one over it, and
@@ -290,7 +301,7 @@ void __init reserve_early(u64 start, u64 end, char *name)
 {
        if (start >= end)
                return;
-
+       printk(KERN_DEBUG "early_res: %s: %llx-%llx\n", name, start, end);
        __check_and_double_early_res(start, end);
 
        drop_overlaps_that_are_ok(start, end);
@@ -492,6 +503,7 @@ static inline int __init bad_addr(u64 *addrp, u64 size, u64 align)
        u64 addr = *addrp;
        int changed = 0;
        struct early_res *r;
+       early_res_print();
 again:
        i = find_overlapped_early(addr, addr + size);
        r = &early_res[i];

Print each reservation as it is made, and dump early_res whenever bad_addr is called:

early_res: TEXT DATA BSS: 100000-55d0c4
early_res: RAMDISK: 377cc000-37ff0000
Linux version 2.6.35.7-default+ (root@lj) (gcc version 4.2.1 (SUSE Linux)) #3 SMP Mon Oct 17 22:43:57 EDT 2011
BIOS-provided physical RAM map:

There is TEXT DATA BSS! And it is printed before the banner, i.e. before start_kernel is entered. Looking up references to reserve_early, it is easy to see the reservation happens in i386_start_kernel():

void __init i386_start_kernel(void)
{
***
	reserve_early(__pa_symbol(&_text), __pa_symbol(&__bss_stop), "TEXT DATA BSS");

OK, that settles this question.

 

The e820.map just before find_e820_area():

 wjcdx: 0000000000000000 - 0000000000010000 (reserved)
 wjcdx: 0000000000010000 - 000000000009f800 (usable)
 wjcdx: 000000000009f800 - 00000000000a0000 (reserved)
 wjcdx: 00000000000ca000 - 00000000000cc000 (reserved)
 wjcdx: 00000000000dc000 - 00000000000e4000 (reserved)
 wjcdx: 00000000000e8000 - 0000000000100000 (reserved)
 wjcdx: 0000000000100000 - 000000003fef0000 (usable)
 wjcdx: 000000003fef0000 - 000000003feff000 (ACPI data)
 wjcdx: 000000003feff000 - 000000003ff00000 (ACPI NVS)
 wjcdx: 000000003ff00000 - 0000000040000000 (usable)
 wjcdx: 00000000fec00000 - 00000000fec10000 (reserved)
 wjcdx: 00000000fee00000 - 00000000fee01000 (reserved)
 wjcdx: 00000000fffe0000 - 0000000100000000 (reserved)

 

K8

The bootmem allocator manages memory before the buddy system and slab allocator take over;

CONFIG_K8_NUMA: K8 is an AMD CPU architecture.

 

 

 

TODO

Two questions to study:

  1. How are a process's page tables built, and how do they change?
  2. How do the buddy system and slab allocator interact with the page tables, i.e. how do allocated physical pages get mapped into them?

First we study the bootmem / buddy system / slab allocator mechanisms and high memory management, then return to the two page table questions above.

 

In setup_arch():

	/*
	 * Parse the ACPI tables for possible boot-time SMP configuration.
	 */
	acpi_boot_table_init();		/* parse the ACPI tables for the SMP configuration */

	early_acpi_boot_init();

#ifdef CONFIG_ACPI_NUMA
	/*
	 * Parse SRAT to discover nodes.
	 */
	acpi = acpi_numa_init();	/* parse the ACPI SRAT tables: NUMA nodes and CPU-node distances */
#endif

#ifdef CONFIG_K8_NUMA
	if (!acpi)
		k8 = !k8_numa_init(0, max_pfn);	/* AMD K8-specific NUMA initialization */
#endif

	initmem_init(0, max_pfn, acpi, k8);	/* initialize the bootmem allocator */

 

(Wrong) The bootmem allocator initialization in initmem_init

initmem_init() is defined in two places, numa_32.c and init_32.c. Via

$make arch/x86/kernel/setup.i

we find that the initmem_init() declaration comes from the header arch/x86/include/asm/page_types.h:

# 40 "/home/wjcdx/linux/linux-2.6/arch/x86/include/asm/page_types.h" 2

extern int devmem_is_allowed(unsigned long pagenr);

extern unsigned long max_low_pfn_mapped;
extern unsigned long max_pfn_mapped;

static inline phys_addr_t get_max_mapped(void)
{
	return (phys_addr_t)max_pfn_mapped << 12;
}

extern unsigned long init_memory_mapping(unsigned long start,
					 unsigned long end);

extern void initmem_init(void);
extern void free_initmem(void);

and the file including this header is init_32.c.

 


 

Er — the guess in the note above was wrong, though it set the stage for this one.

The real reason: setup.i was generated from the latest (3.0+) source, while in Source Insight I was reading 2.6.35.7; in the latest source both definitions are void initmem_init(void) :)

 

highstart_pfn = max_low_pfn
highend_pfn = max_pfn
num_physpages = highend_pfn;
high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1;

__vmalloc_start_set = true;

vmalloc_start corresponds to highstart_pfn, I think.

 

e820_register_active_regions(0, 0, highend_pfn);
sparse_memory_present_with_active_regions(0);

This adds the ranges in e820.map between 0 and max_pfn whose type is E820_RAM to early_node_map.

 

setup_bootmem_allocator();

Now the bootmem allocator initialization begins.

 

void __init setup_bootmem_allocator(void)
{
#ifndef CONFIG_NO_BOOTMEM
	int nodeid;
	unsigned long bootmap_size, bootmap;
	/*
	 * Initialize the boot-time allocator (with low memory only):
	 */
	bootmap_size = bootmem_bootmap_pages(max_low_pfn)<<PAGE_SHIFT;
	bootmap = find_e820_area(0, max_pfn_mapped<<PAGE_SHIFT, bootmap_size,
				 PAGE_SIZE);
	if (bootmap == -1L)
		panic("Cannot find bootmem map of size %ld\n", bootmap_size);
	reserve_early(bootmap, bootmap + bootmap_size, "BOOTMAP");
#endif

	printk(KERN_INFO "  mapped low ram: 0 - %08lx\n",
		 max_pfn_mapped<<PAGE_SHIFT);
	printk(KERN_INFO "  low ram: 0 - %08lx\n", max_low_pfn<<PAGE_SHIFT);

#ifndef CONFIG_NO_BOOTMEM
	for_each_online_node(nodeid) {
		 unsigned long start_pfn, end_pfn;

#ifdef CONFIG_NEED_MULTIPLE_NODES
		start_pfn = node_start_pfn[nodeid];
		end_pfn = node_end_pfn[nodeid];
		if (start_pfn > max_low_pfn)
			continue;
		if (end_pfn > max_low_pfn)
			end_pfn = max_low_pfn;
#else
		start_pfn = 0;
		end_pfn = max_low_pfn;
#endif
		bootmap = setup_node_bootmem(nodeid, start_pfn, end_pfn,
						 bootmap);
	}
#endif

	after_bootmem = 1;
}

||
\/
in setup_bootmem_allocator:

bootmap = setup_node_bootmem(nodeid, start_pfn, end_pfn,
                             bootmap);

||
\/
In setup_node_bootmem:

bootmap_size = init_bootmem_node(NODE_DATA(nodeid),
                                 bootmap >> PAGE_SHIFT,
                                 start_pfn, end_pfn);
***
	return bootmap + bootmap_size;

||
\/

return init_bootmem_core(pgdat->bdata, freepfn, startpfn, endpfn);

||
\/
In init_bootmem_core:

bdata->node_bootmem_map = phys_to_virt(PFN_PHYS(mapstart));
***
mapsize = bootmap_bytes(end - start);
memset(bdata->node_bootmem_map, 0xff, mapsize);

Questions about setup_bootmem_allocator:

  1. The allocated bootmap could be too small: if a node held only one page frame, that single frame would still occupy four bytes (one long) of bootmap space, so a bootmap sized this way would not suffice;
  2. In init_bootmem_core, suppose the first call is memset(virtaddr_of(page_with_no(15)), 0xff, 8); if the bootmap fits within one page, the second would be memset(virtaddr_of(page_with_no(15)), 0xff, 4) — what is the point of setting it that way?
  3. Where is node_data, used via node_start_pfn, initialized?

 

bootmap_bytes:

static unsigned long __init bootmap_bytes(unsigned long pages)
{
	unsigned long bytes = (pages + 7) / 8;	/* ceiling of pages/8 (pages being an integer) */

	/* (bytes + sizeof(long) - 1) & ~(sizeof(long) - 1): round bytes up to a multiple of sizeof(long), i.e. 4 */
	return ALIGN(bytes, sizeof(long));
}

 

We won't dwell on this side quest; assume the bootmap is initialized.

PS: in the latest (3.0+) code, setup_bootmem_allocator in init_32.c contains only two printks yet still sets after_bootmem = 1; — so how is memory allocated after that? Directly using early_pages_start, or the bootmem allocator?

 

reserve_bootmem & free_bootmem set or clear bits in the bitmap; the logic is simple, so no further detail here.


 

Dumping the memory state — in setup_arch:

#ifndef CONFIG_NO_BOOTMEM
        early_res_to_bootmem(0, max_low_pfn << PAGE_SHIFT);
#endif

This copies the memory-reservation state stored in early_res into the bootmem allocator.

 

Overview of zones, the buddy system, and the slab — in init_32.c::paging_init():

sparse_init();      /* xxx */
zone_sizes_init();  /* allocate and initialize struct page — the buddy system */

In start_kernel():

build_all_zonelists(NULL);  /* build a zonelist per zone: the order in which zones are tried for allocation */
***
mm_init();
||
\/
/*
 * Set up kernel memory allocators
 */
static void __init mm_init(void)
{
        /*
         * page_cgroup requires countinous pages as memmap
         * and it's bigger than MAX_ORDER unless SPARSEMEM.
         */
        page_cgroup_init_flatmem();
        mem_init();             /* release all free pages in bootmem to the buddy system */
        kmem_cache_init();      /* initialize the slab allocator */
        pgtable_cache_init();
        vmalloc_init();         /* initialize the dynamically mapped (vmalloc) area */
}

Reference: http://blog.chinaunix.net/space.php?uid=20543183&do=blog&id=1930810

In start_kernel:

kmem_cache_init_late();

 

 

How the hard-to-analyze NUMA path revealed where initmem_init is defined

At this point the analysis seems stuck: going further runs into many NUMA-related data structures, and they appear quite abruptly — not skipped, simply never encountered before. So where are they initialized?

Tracing the boot sequence turned up nothing.

Tracing NODE_DATA leads to numa_32.c. That file contains the functions related to struct pglist_data (i.e. pg_data_t), plus a familiar-looking initmem_init near the end; moreover allocate_pgdat, which allocates the pg_data, is called from it. Yet by the earlier analysis, the initmem_init called from setup_arch is the one in init_32.c.

Could the earlier analysis be wrong?

Looking at the definitions: initmem_init is defined in several places but declared only once, in page_types.h.

init_32.c includes page_types.h explicitly; could numa_32.c include it implicitly? Given the tangled web of header inclusions, this is hard to settle by inspection, so let the preprocessor answer:

$make arch/x86/mm/numa_32.i

The preprocessed output shows the declaration indeed comes from page_types.h.

Now look again at the definition of initmem_init in init_32.c:

#ifndef CONFIG_NEED_MULTIPLE_NODES
void __init initmem_init(unsigned long start_pfn, unsigned long end_pfn,
                         int acpi, int k8)
{
***
}
#endif /* !CONFIG_NEED_MULTIPLE_NODES */

It is only compiled when CONFIG_NEED_MULTIPLE_NODES is not defined — so with CONFIG_NUMA enabled, the initmem_init in init_32.c is never built.

That resolves the confusion.

Linux memory models

While reading documentation and configuring the kernel, three Linux memory models come up: FLAT, SPARSE, and DISCONTIG (discontiguous).

 

 

Now look at initmem_init in numa_32.c, ignoring the sparse-related code since SPARSEMEM is a relatively new feature.

 

initmem_init in the NUMA case

numa_32.c::initmem_init:

static inline void get_memcfg_numa(void)
{
        if (get_memcfg_numaq())
                return;
        if (get_memcfg_from_srat())
                return;
        get_memcfg_numa_flat(); /* fall back to the flat model: a single node */
}

Three ways to discover the memory layout. The global variables initialized here:

node_start_pfn/node_end_pfn,
physnode_map,
node_remap_size.

It also calls e820_register_active_regions to register the discovered memory in early_node_map.

 

In calculate_numa_remap_pages, a map of node_remap_size[nid] pages is allocated for each node, on that node itself.

The map is kept near the end of the physical page range that has already been registered.

In get_memcfg_numa, node_remap_size[nid] = (end_pfn - start_pfn + 1) * sizeof(struct page); calculate_numa_remap_pages then adds sizeof(pg_data_t) to node_remap_size[nid] and initializes node_remap_offset.

Although the node_remap areas are physically discontiguous, the offsets recorded in node_remap_offset are contiguous — a hint that when they are mapped later, the linear address range will be contiguous, most likely in kernel virtual address space.

 

numa_32.c::initmem_init():

        for_each_online_node(nid) {
                init_remap_allocator(nid); /* set up node_remap_start_vaddr/node_remap_end_vaddr/node_remap_alloc_vaddr */
                allocate_pgdat(nid);       /* allocate NODE_DATA(nid) */
        }
        remap_numa_kva(); /* install the page-directory (PMD) entries for each node's remap pfns */

        printk(KERN_DEBUG "High memory starts at vaddr %08lx\n",
                        (ulong) pfn_to_kaddr(highstart_pfn));
        for_each_online_node(nid)
                propagate_e820_map_node(nid);

        for_each_online_node(nid) { /* initialize NODE_DATA(nid) */
                memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
                NODE_DATA(nid)->node_id = nid;
#ifndef CONFIG_NO_BOOTMEM
                NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
#endif
        }

        setup_bootmem_allocator(); /* set up the bootmem allocator; the first two of the three questions raised earlier still apply */

 

numa_32.c::initmem_init():

        kva_pages = roundup(calculate_numa_remap_pages(), PTRS_PER_PTE);

        kva_target_pfn = round_down(max_low_pfn - kva_pages, PTRS_PER_PTE);

The kva window for the node remap areas is mapped at the very top of low memory.

 

zone_sizes_init

Next comes paging_init()->zone_sizes_init():

static void __init zone_sizes_init(void)
{
        unsigned long max_zone_pfns[MAX_NR_ZONES];
        memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
        max_zone_pfns[ZONE_DMA] =
                virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
        max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
        max_zone_pfns[ZONE_HIGHMEM] = highend_pfn;
#endif

        free_area_init_nodes(max_zone_pfns);
}

zone_sizes_init() simply fills in max_zone_pfns and calls into free_area_init_nodes().

 

Allocating the struct page descriptors

The struct page descriptors for the page frames in each node have also already been allocated in initmem_init.

That statement is not quite accurate: initmem_init only reserves the space for them — although reserving could be called allocating.

 

/**
 * free_area_init_nodes - Initialise all pg_data_t and zone data
 * @max_zone_pfn: an array of max PFNs for each zone
 */

void __init free_area_init_nodes(unsigned long *max_zone_pfn)

The function first calls sort_node_map() to sort the node map and then initializes some global variables; find_zone_movable_pfns_for_nodes is skipped here; then:

/* Initialise every node */
        mminit_verify_pageflags_layout();
        setup_nr_node_ids();            /* set up the node-id bitmap */
        for_each_online_node(nid) {     /* walk every online NUMA node */
                pg_data_t *pgdat = NODE_DATA(nid);
                free_area_init_node(nid, NULL,
                                find_min_pfn_for_node(nid), NULL); /* initialize node nid */

                /* Any memory on that node */
                if (pgdat->node_present_pages)
                        node_set_state(nid, N_HIGH_MEMORY);
                check_for_regular_memory(pgdat);
        }

 

 

void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
                unsigned long node_start_pfn, unsigned long *zholes_size)
{
        pg_data_t *pgdat = NODE_DATA(nid);

        pgdat->node_id = nid;
        pgdat->node_start_pfn = node_start_pfn;
        calculate_node_totalpages(pgdat, zones_size, zholes_size);

        alloc_node_mem_map(pgdat); /* allocate the struct page descriptors for the node's
                                    * page frames and set pgdat->node_mem_map; the end
                                    * address of that map is MAX_ORDER-aligned */
#ifdef CONFIG_FLAT_NODE_MEM_MAP
        printk(KERN_DEBUG "free_area_init_node: node %d, pgdat %08lx, node_mem_map %08lx\n",
                nid, (unsigned long)pgdat,
                (unsigned long)pgdat->node_mem_map);
#endif

        free_area_init_core(pgdat, zones_size, zholes_size); /* initialize the zone and page structures */
}

 

/*
 * Set up the zone data structures:
 *   - mark all pages reserved
 *   - mark all memory queues empty
 *   - clear the memory bitmaps
 */
static void __paginginit free_area_init_core(struct pglist_data *pgdat,
                unsigned long *zones_size, unsigned long *zholes_size)
{
        /* initialize pgdat */

        for (j = 0; j < MAX_NR_ZONES; j++) {
                struct zone *zone = pgdat->node_zones + j;
                /* a series of zone initializations */
                ret = init_currently_empty_zone(zone, zone_start_pfn,
                                                size, MEMMAP_EARLY); /* set up wait_table and free_list */
                BUG_ON(ret);
                memmap_init(size, nid, j, zone_start_pfn); /* initialize the struct page descriptors */
                zone_start_pfn += size;
        }
}

memmap_init() => memmap_init_zone():

/*
 * Initially all pages are reserved - free ones are freed
 * up by free_all_bootmem() once the early boot process is
 * done. Non-atomic initialization, single-pass.
 */
void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
                unsigned long start_pfn, enum memmap_context context)
{
***
        for (pfn = start_pfn; pfn < end_pfn; pfn++) {
***
                page = pfn_to_page(pfn);
                set_page_links(page, zone, nid, pfn);
                mminit_verify_page_links(page, zone, nid, pfn);
                init_page_count(page);
                reset_page_mapcount(page);
                SetPageReserved(page);
***
        }
}

 

 

Next, start_kernel()->build_all_zonelists() builds the zonelists. The logic is clear: once you understand how a zonelist is used, the initialization here is easy to follow.

 

 

After that comes start_kernel()->mm_init(), which releases the bootmem page frames into the buddy system, initializes the slab allocator, and sets up the dynamically mapped (vmalloc) area. Each step is outlined below.

 

Initializing the slab allocator

mm_init()->mem_init():

void __init mem_init(void)
{
***
        /* this will put all low memory onto the freelists */
        totalram_pages += free_all_bootmem(); /* see the comment above */

***
        save_pg_dir();          /* save swapper_pg_dir for system suspend/resume */
        zap_low_mappings(true); /* erase the 0~3G mappings in swapper_pg_dir; the kernel only uses the 3~4G linear space */
}

 

free_all_bootmem() needs no further detail either.

 

void __init kmem_cache_init(void)
{
       
/* Bootstrap is tricky, because several objects are allocated
         * from caches that do not exist yet:
         * 1) initialize the cache_cache cache: it contains the struct
         *    kmem_cache structures of all caches, except cache_cache itself:
         *    cache_cache is statically allocated.
         *    Initially an __init data area is used for the head array and the
         *    kmem_list3 structures, it's replaced with a kmalloc allocated
         *    array at the end of the bootstrap.
         * 2) Create the first kmalloc cache.
         *    The struct kmem_cache for the new cache is allocated normally.
         *    An __init data area is used for the head array.
         * 3) Create the remaining kmalloc caches, with minimally sized
         *    head arrays.
         * 4) Replace the __init data head arrays for cache_cache and the first
         *    kmalloc cache with kmalloc allocated arrays.
         * 5) Replace the __init data for kmem_list3 for cache_cache and
         *    the other cache's with kmalloc allocated memory.
         * 6) Resize the head arrays of the kmalloc caches to their final sizes.
         */
}
start_kernel()
||
\/
void __init kmem_cache_init_late(void)
{
        /* 6) resize the head arrays to their final sizes */
}

The main slab structures are the cache, the slab, and the object.

FROM ULK3:

  • each cache is a "store" of objects of the same type;
  • a cache is divided into slabs, each of which consists of one or more contiguous page frames that contain both allocated & free objects;

The system statically allocates a cache_cache (the name says it all): the cache that holds the kmem_cache structures of the slab algorithm itself. It also statically allocates an array_cache and a kmem_list3 for use while the slab allocator bootstraps.

Initialization first creates caches for the array_cache and kmem_list3 types inside cache_cache, then creates the power-of-two sized caches; at that point the data structures the slab allocator itself needs are in place. kmalloc is then used to allocate real array_cache and kmem_list3 objects, replacing the statically allocated, temporary initarray_cache and initkmem_list3.

kmem_cache_init_late()->enable_cpucache()->do_tune_cpucache()->alloc_arraycache() allocates an array_cache for every CPU, because kmem_cache_init only allocated the array_cache of the first CPU.

 

vmalloc_init

void __init vmalloc_init(void)
{

        /* Import existing vmlist entries. */
        for (tmp = vmlist; tmp; tmp = tmp->next) {
                va = kzalloc(sizeof(struct vmap_area), GFP_NOWAIT);
                va->flags = tmp->flags | VM_VM_AREA;
                va->va_start = (unsigned long)tmp->addr;
                va->va_end = va->va_start + tmp->size;
                __insert_vmap_area(va);
        }

        vmap_area_pcpu_hole = VMALLOC_END;

        vmap_initialized = true;
}

Throughout the page-table initialization analysis, the code that initializes the page-table entries for the dynamically mapped (vmalloc) part of the linear space never showed up. Here, at last, is the obvious vmalloc_init — but the function is a little alarming: it clearly iterates over the global vmlist, building a vmap_area for each entry. When was anything added to vmlist? Was something missed?

A careful search turned up nothing. The only way to be certain was to build and boot an instrumented kernel and inspect the runtime state.

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ae00746..be89c7f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1087,6 +1087,7 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 
 void __init vmalloc_init(void)
 {
+       printk(KERN_DEBUG "### vmalloc_init");
        struct vmap_area *va;
        struct vm_struct *tmp;
        int i;
@@ -1105,6 +1106,7 @@ void __init vmalloc_init(void)
                va->flags = tmp->flags | VM_VM_AREA;
                va->va_start = (unsigned long)tmp->addr;
                va->va_end = va->va_start + tmp->size;
+               printk(KERN_DEBUG "vmalloc_init: %lx-%lx", va->va_start, va->va_end);
                __insert_vmap_area(va);
        }
 
@@ -1197,6 +1199,7 @@ struct vm_struct *vmlist;
 
 static void insert_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
                              unsigned long flags, void *caller)
 {
+       printk(KERN_DEBUG "insert_vmalloc_vm: %lx-%lx", va->va_start, va->va_end);
        struct vm_struct *tmp, **p;
 
        vm->flags = flags;
The printk output confirms that vmlist is indeed empty:

### vmalloc_init
Hierarchical RCU implementation.
        Verbose stalled-CPUs detection is disabled.
NR_IRQS:512
Extended CMOS year: 2000
Console: colour dummy device 80x25

 

Summary of the memory allocation machinery

ULK3::8.2.11. Local Caches of Free Slab Objects

The Linux 2.6 implementation of the slab allocator for multiprocessor systems differs from that of the original Solaris 2.4. To reduce spin lock contention among processors and to make better use of the hardware caches, each cache of the slab allocator includes a per-CPU data structure consisting of a small array of pointers to freed objects called the slab local cache. Most allocations and releases of slab objects affect the local cache only; the slab data structures get involved only when the local cache underflows or overflows.

 

The individual allocation paths are all easy to understand, so rather than detail them, here is how the allocators relate to one another:

kmalloc calls into the slab allocator to carve data structures out of a slab. The slab allocator first tries the local (per-CPU) cache; if that is empty, it allocates a new slab, which means asking the zone allocator for memory pages. Each zone has its own buddy system, and the pages of a zone may in turn be spread across different NUMA nodes.

The alloc_pages family, of course, can call the zone allocator directly.

 

A process's page tables

A process's struct mm_struct has a field pgd_t *pgd pointing to its page tables.

The threads within a process share the memory space and other resources.

When a process creates a child, it copies the parent's page tables. All processes are descendants of init, and init's page tables are exactly the ones allocated and initialized during boot.

 

 
