
Category: LINUX

2017-04-30 17:23:56

DPDK memory management: memory initialization

--- lvyilong316 (please credit the source when reposting)
Note: the source code in this series is taken from dpdk-17.02.

1.1 Memory initialization

1.1.1 The hugepage mechanism

Compared with ordinary 4K pages, hugepages (2M, 1G, ...) have a few notable properties:

(1) Hugepages are not subject to the normal virtual memory reclaim path and are never swapped out, whereas an ordinary 4K page may be pushed out to the swap area by the virtual memory subsystem when physical memory runs short.

(2) For the same amount of memory, hugepages need far fewer page table entries than 4K pages.

For example, if a user process needs 4M of memory, 4K pages require 1K page table entries to hold the virtual-to-physical mappings, while 2M hugepages require only 2. This has two consequences. First, the page tables for hugepage-backed memory are much smaller; for applications such as database systems that routinely map very large data sets into the process, page table overhead is significant, which is why many databases use hugepages. Second, the TLB miss rate drops sharply. The TLB sits on the CPU chip and is the fastest address-translation cache available, but it typically holds only a hundred or so entries, so hugepages can greatly reduce the cost of TLB misses. On a TLB hit the physical address is obtained immediately; on a miss the hardware must walk cr3 -> the process page global directory (pgd) -> the page middle directory (pmd) -> the page table entry -> physical memory, and if the pmd or the page itself has been swapped out it must first be loaded back from the swap area. In short, TLB misses are a performance killer, and hugepages are an effective way to reduce them.

Using hugepages on Linux is straightforward. Taking 2M hugepages as an example:

1. /sys/kernel/mm/hugepages/hugepages-2048kB/ — the files under this directory control how many hugepages of this size are reserved;

2. mount -t hugetlbfs nodev /mnt/huge — Linux exposes hugepages as a filesystem, hugetlbfs, which must be mounted at some directory;

3. mmap a file under /mnt/huge — in the user process, mmap a file created on the hugetlbfs mount; the address returned by mmap is backed by hugepages. A minimal example follows.
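To make step 3 concrete, here is a minimal standalone sketch (not DPDK code) that maps one 2M hugepage through hugetlbfs. It assumes the /mnt/huge mount point from step 2 and at least one reserved 2M page; the file name demo_map is made up for the example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define HUGEPAGE_SZ (2UL * 1024 * 1024)   /* one 2M hugepage */

    int main(void)
    {
        /* the mount point from step 2; "demo_map" is an arbitrary file name */
        int fd = open("/mnt/huge/demo_map", O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* an mmap of a hugetlbfs file is backed by hugepages */
        void *va = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
        if (va == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        *(int *)va = 42;   /* touch the page so it is actually faulted in */
        printf("hugepage mapped at %p\n", va);

        munmap(va, HUGEPAGE_SZ);
        close(fd);
        unlink("/mnt/huge/demo_map");
        return 0;
    }

Run it with permission to create files under /mnt/huge; unlike anonymous memory, a page obtained this way is never swapped out.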

 

1.1.2 Multi-process sharing

The mmap system call can create shared mappings, and DPDK's memory sharing is built on it. The cooperating processes play one of two roles: the primary process (RTE_PROC_PRIMARY) and the secondary processes (RTE_PROC_SECONDARY). There is exactly one primary; it must start before any secondary and performs the initialization of the DPDK library environment, while the secondaries attach to the environment the primary has initialized. The primary first mmaps the hugetlbfs files and builds the memory-management structures, storing them in the shared configuration file rte_config; the other processes then mmap the rte_config file to obtain those structures. DPDK uses a trick to guarantee that the same shared physical memory ends up at exactly the same virtual address in every process, which means data shared through DPDK inside one process, including pointers into that data, can be used as-is by other processes.
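The core of that trick is simple: the primary records the virtual address it was given inside the shared file itself, and every other process re-maps the file at exactly that address. The following single-process sketch (not DPDK code; the path /tmp/demo_shared_cfg and struct shared_hdr are invented for illustration) plays both roles in turn to show the re-attach idea:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* invented shared header: its first field stores the chosen base address */
    struct shared_hdr {
        uintptr_t base_va;
        char      payload[64];
    };

    int main(void)
    {
        int fd = open("/tmp/demo_shared_cfg", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, sizeof(struct shared_hdr)) != 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* "primary": let the kernel pick an address and record it inside the file */
        struct shared_hdr *hdr = mmap(NULL, sizeof(*hdr), PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        hdr->base_va = (uintptr_t)hdr;
        strcpy(hdr->payload, "hello from primary");
        munmap(hdr, sizeof(*hdr));

        /* "secondary": map once to learn base_va, then re-map at exactly that address */
        struct shared_hdr *tmp = mmap(NULL, sizeof(*tmp), PROT_READ, MAP_SHARED, fd, 0);
        void *wanted = (void *)tmp->base_va;
        munmap(tmp, sizeof(*tmp));

        struct shared_hdr *again = mmap(wanted, sizeof(*again),
                                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if ((void *)again != wanted)
            fprintf(stderr, "could not re-attach at the recorded address\n");
        else
            printf("%s (re-attached at %p)\n", again->payload, (void *)again);

        close(fd);
        unlink("/tmp/demo_shared_cfg");
        return 0;
    }

In DPDK the two roles are separate processes, the file is /var/run/.rte_config, and the recorded address lives in the mem_cfg_addr field, as the later sections show.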

 

1.1.3 Related data structures

• rte_config

The global memory configuration structure.

1) rte_config itself is a per-process, private data structure; its fields are the process's own configuration.

2) lcore_role: which cores this DPDK process runs on, as set with the -c command-line parameter.

3) master_lcore: in DPDK's architecture, one of the lcores assigned to each process acts as the master core; for most users this has little practical impact.

4) lcore_count: the number of cores this process can use.

5) process_type: the process's role in DPDK multi-process mode, RTE_PROC_PRIMARY or RTE_PROC_SECONDARY; the primary initializes the DPDK memory tables and the secondaries use them.

6) mem_config: pointer to the memory configuration structure shared by all DPDK processes on the machine. This structure is mmap'd to the file /var/run/.rte_config, which is how the processes share mem_config.
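For reference, the structure looks roughly like this in dpdk-17.02 (a trimmed sketch of rte_eal.h; consult the header for the authoritative definition):

    /* trimmed sketch of the declaration in dpdk-17.02 (rte_eal.h) */
    struct rte_config {
        uint32_t master_lcore;                 /* id of the master lcore */
        uint32_t lcore_count;                  /* number of usable lcores */
        enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /* per-core role, derived from -c */

        enum rte_proc_type_t process_type;     /* RTE_PROC_PRIMARY or RTE_PROC_SECONDARY */

        struct rte_mem_config *mem_config;     /* shared memory config, mmap'd from /var/run/.rte_config */
    } __attribute__((__packed__));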

 

• hugepage_file

This is an array of struct hugepage_file; each element describes one hugepage and records the mapping between the page's physical address and its virtual address in the process. The whole array is in turn mapped into the file /var/run/.rte_hugepage_info, which is also shared, so both the primary and the secondary processes can access it.

1) file_id: the sequence number of the page's backing file in the hugetlbfs mount, i.e. its index in the array;

2) filepath: the path of the backing file on the hugetlbfs mount, built as %s/%smap_%d from the mount directory, the file prefix and file_id;

3) size: the size of this hugepage, 2M or 1G;

4) socket_id: the CPU socket this page belongs to;

5) physaddr: the physical address of this hugepage;

6) orig_va: like final_va, a virtual address of this hugepage; it is used by the primary process while initializing the hugepages and is not needed afterwards;

7) final_va: the virtual address at which the page is finally mapped in the primary/secondary processes.
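A trimmed sketch of the structure as dpdk-17.02 declares it (see eal_hugepages.h for the authoritative definition):

    /* trimmed sketch of the declaration in dpdk-17.02 (eal_hugepages.h) */
    struct hugepage_file {
        void *orig_va;      /* virtual address of the first mmap() */
        void *final_va;     /* virtual address of the second mmap() */
        uint64_t physaddr;  /* physical address of the page */
        size_t size;        /* page size (2M or 1G) */
        int socket_id;      /* NUMA socket the page lives on */
        int file_id;        /* the %d in the backing file name */
        int memseg_id;      /* memory segment this page ends up in */
        char filepath[MAX_HUGEPAGE_PATH]; /* path of the backing file on hugetlbfs */
    };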

 

Because the whole array is mapped into a file, it is shared by all processes. The primary process initializes the array: it first mmaps every hugepage physical page into its own virtual address space, then saves the resulting mappings into the file. When a secondary process starts, it reads the file and recreates exactly the same mappings in its own address space; as a result the memory managed by DPDK is visible in every process, at identical addresses.

When assigning virtual addresses to the pages, DPDK tries to map physically contiguous pages to contiguous virtual addresses. This is genuinely useful: the CPU, the caches and the memory controller all deal in physical memory, and accesses to physically contiguous memory tend to perform better. Which ranges are contiguous and which are not is tracked by a further structure layered on top, rte_mem_config.memseg; since rte_mem_config is itself mapped into a file, the memseg array is visible to every process as well.

• rte_mem_config

This structure is mmap'd to the file /var/run/.rte_config; the primary and secondary processes share it by accessing that file. Within each process it is reached through rte_config.mem_config.
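Roughly, the shared structure looks like this in dpdk-17.02 (an abridged sketch of rte_eal_memconfig.h; several rwlock and tailq fields are omitted here):

    /* abridged sketch of the declaration in dpdk-17.02 (rte_eal_memconfig.h) */
    struct rte_mem_config {
        volatile uint32_t magic;     /* sanity-check magic, set when init completes */

        uint32_t nchannel;           /* number of memory channels */
        uint32_t nrank;              /* number of memory ranks */

        uint32_t memzone_cnt;        /* number of allocated memzones */

        struct rte_memseg memseg[RTE_MAX_MEMSEG];     /* physical memory descriptors */
        struct rte_memzone memzone[RTE_MAX_MEMZONE];  /* memzone descriptors */

        struct malloc_heap malloc_heaps[RTE_MAX_NUMA_NODES]; /* one malloc heap per socket */

        uint64_t mem_cfg_addr;       /* address of this config in the primary process,
                                      * so secondaries can map it at the same place */
    } __attribute__((__packed__));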

• rte_memseg

The memseg array describes physical memory. As noted above, struct hugepage_file records, for each hugepage, the virtual address it is mapped to in the process. The memseg array takes hugepages that are contiguous in both physical and virtual address space, sit on the same socket and have the same page size, and groups each such set into one memseg entry, which gives a much more compact description of the memory.

The rte_memseg structure itself is simple:

1) phys_addr: the starting physical address of the hugepages contained in this memseg;

2) addr: the starting virtual address of those hugepages;

3) len: the size of the region covered by this memseg;

4) hugepage_sz: the page size of these pages (2M or 1G).

All of this information is derived from the hugepage_file array.
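An abridged sketch of the structure as declared in dpdk-17.02's rte_memory.h:

    /* abridged sketch of the declaration in dpdk-17.02 (rte_memory.h) */
    struct rte_memseg {
        phys_addr_t phys_addr;   /* start physical address */
        union {
            void *addr;          /* start virtual address */
            uint64_t addr_64;    /* keeps addr 64 bits wide on 32-bit builds */
        };
        size_t len;              /* length of the segment */
        uint64_t hugepage_sz;    /* page size of the underlying memory (2M or 1G) */
        int32_t socket_id;       /* NUMA socket id */
        uint32_t nchannel;       /* number of channels */
        uint32_t nrank;          /* number of ranks */
    };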

1.2 DPDK memory initialization: source walkthrough

• rte_eal_init

This function is the entry point of DPDK runtime-environment initialization.

The overall code flow of memory initialization is shown in the figure above; below we go through it piece by piece.

• eal_hugepage_info_init

This function is fairly simple: it walks the system's /sys/kernel/mm/hugepages directory and builds the corresponding data structures. Each hugepage size supported by the system has its own subdirectory under /sys/kernel/mm/hugepages; for instance, a system supporting 2M and 1G pages has two such subdirectories, as shown below:

Each directory corresponds to one struct hugepage_info, which records the information found in that directory. What information is that? As the next figure shows:

struct hugepage_info records exactly these values, including the total number of hugepages of this size (nr_hugepages) and the number not yet allocated (free_hugepages). All the struct hugepage_info entries form an array stored in struct internal_config.
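For reference, a sketch of the structure as dpdk-17.02 declares it (eal_internal_cfg.h):

    /* sketch of the declaration in dpdk-17.02 (eal_internal_cfg.h) */
    struct hugepage_info {
        uint64_t hugepage_sz;   /* size of one hugepage of this kind */
        const char *hugedir;    /* directory where this hugetlbfs size is mounted */
        uint32_t num_pages[RTE_MAX_NUMA_NODES]; /* page count per NUMA socket
                                                 * (all counted on socket 0 at this stage) */
        int lock_descriptor;    /* fd used to flock() the hugepage directory */
    };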

The source is as follows:


  1. int
  2. eal_hugepage_info_init(void)
  3. {
  4.          const char dirent_start_text[] = "hugepages-";
  5.          const size_t dirent_start_len = sizeof(dirent_start_text) - 1;
  6.          unsigned i, num_sizes = 0;
  7.          DIR *dir;
  8.          struct dirent *dirent;
  9.  
  10.          dir = opendir(sys_dir_path); /* /sys/kernel/mm/hugepages */
  11.          if (dir == NULL)
  12.                   rte_panic("Cannot open directory %s to read system hugepage "
  13.                              "info\n", sys_dir_path);
  14.  
  15.          for (dirent = readdir(dir); dirent != NULL; dirent = readdir(dir)) {
  16.                   struct hugepage_info *hpi;
  17.  
  18.                   if (strncmp(dirent->d_name, dirent_start_text,
  19.                                dirent_start_len) != 0)
  20.                            continue;
  21.  
  22.                   if (num_sizes >= MAX_HUGEPAGE_SIZES)
  23.                            break;
  24.  
  25.                   hpi = &internal_config.hugepage_info[num_sizes];
  26.                   hpi->hugepage_sz =
  27.                            rte_str_to_size(&dirent->d_name[dirent_start_len]);
  28.                   hpi->hugedir = get_hugepage_dir(hpi->hugepage_sz);
  29.  
  30.                   /* first, check if we have a mountpoint */
  31.                   if (hpi->hugedir == NULL) {
  32.                            uint32_t num_pages;
  33.  
  34.                            num_pages = get_num_hugepages(dirent->d_name);
  35.                            if (num_pages > 0)
  36.                                     RTE_LOG(NOTICE, EAL,
  37.                                              "%" PRIu32 " hugepages of size "
  38.                                              "%" PRIu64 " reserved, but no mounted "
  39.                                              "hugetlbfs found for that size\n",
  40.                                              num_pages, hpi->hugepage_sz);
  41.                            continue;
  42.                   }
  43.  
  44.                   /* try to obtain a writelock */
  45.                   hpi->lock_descriptor = open(hpi->hugedir, O_RDONLY);
  46.  
  47.                   /* if blocking lock failed */
  48.                   if (flock(hpi->lock_descriptor, LOCK_EX) == -1) {
  49.                            RTE_LOG(CRIT, EAL,
  50.                                     "Failed to lock hugepage directory!\n");
  51.                            break;
  52.                   }
  53.                   /* clear out the hugepages dir from unused pages */
  54.                   if (clear_hugedir(hpi->hugedir) == -1)
  55.                            break;
  56.  
  57.                   /* for now, put all pages into socket 0,
  58.                    * later they will be sorted */
  59.                   /* pages are not yet counted per socket here; record the total in hugepage_info's num_pages[0] for now */
  60.                   hpi->num_pages[0] = get_num_hugepages(dirent->d_name);
  61.  
  62. #ifndef RTE_ARCH_64
  63.                   /* for 32-bit systems, limit number of hugepages to
  64.                    * 1GB per page size */
  65.                   hpi->num_pages[0] = RTE_MIN(hpi->num_pages[0],
  66.                                                  RTE_PGSIZE_1G / hpi->hugepage_sz);
  67. #endif
  68.  
  69.                   num_sizes++;
  70.          }
  71.          closedir(dir);
  72.  
  73.          /* something went wrong, and we broke from the for loop above */
  74.          if (dirent != NULL)
  75.                   return -1;
  76.  
  77.          internal_config.num_hugepage_sizes = num_sizes;
  78.  
  79.          /* sort the page directory entries by size, largest to smallest */
  80.          qsort(&internal_config.hugepage_info[0], num_sizes,
  81.                sizeof(internal_config.hugepage_info[0]), compare_hpi);
  82.  
  83.          /* now we have all info, check we have at least one valid size */
  84.          for (i = 0; i < num_sizes; i++)
  85.                   if (internal_config.hugepage_info[i].hugedir != NULL &&
  86.                       internal_config.hugepage_info[i].num_pages[0] > 0)
  87.                            return 0;
  88.  
  89.          /* no valid hugepage mounts available, return error */
  90.          return -1;
  91. }

Hugepages of a given size are further classified by the socket they reside on (see any NUMA reference if this is unfamiliar), and the page counts in hugepage_info are kept per socket. At this stage (eal_hugepage_info_init), however, it is not yet known which socket each page lives on, so the counts cannot be split by socket; the total is therefore recorded temporarily in num_pages[0] of hugepage_info.

One detail deserves attention: how the page count returned by get_num_hugepages is obtained. Without expanding that function here, its implementation returns free_hugepages - resv_hugepages, i.e. the number of usable hugepages in the entire system. The later mmap pass uses this value, so every usable page in the system gets mmap'd. This raises a question: a DPDK process is started with a parameter specifying how much memory it wants, and it only really needs to allocate and mmap that much, so why mmap every page in the system? The reason is to find as much physically contiguous memory as possible at the system level; the DPDK process wants contiguous memory for performance. The surplus mappings are unmapped by DPDK later, as described further below.
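Since these counters are plain text files under sysfs, the computation is easy to reproduce with a small standalone program (not DPDK code; it assumes 2M hugepages and hard-codes the sysfs directory):

    #include <stdio.h>

    /* minimal sketch: read one counter file, e.g.
     * /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages */
    static long read_sysfs_counter(const char *path)
    {
        long val = -1;
        FILE *f = fopen(path, "r");
        if (f == NULL)
            return -1;
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
        return val;
    }

    int main(void)
    {
        const char *dir = "/sys/kernel/mm/hugepages/hugepages-2048kB";
        char path[256];
        long free_pages, resv_pages;

        snprintf(path, sizeof(path), "%s/free_hugepages", dir);
        free_pages = read_sysfs_counter(path);
        snprintf(path, sizeof(path), "%s/resv_hugepages", dir);
        resv_pages = read_sysfs_counter(path);

        /* same idea as get_num_hugepages(): pages that are still usable */
        printf("usable 2M hugepages: %ld\n", free_pages - resv_pages);
        return 0;
    }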

 

• rte_config_init



  1. static void
  2. rte_config_init(void)
  3. {
  4.          rte_config.process_type = internal_config.process_type;
  5.  
  6.          switch (rte_config.process_type){
  7.          case RTE_PROC_PRIMARY:
  8.                   rte_eal_config_create();
  9.                   break;
  10.          case RTE_PROC_SECONDARY:
  11.                   rte_eal_config_attach();
  12.                   rte_eal_mcfg_wait_complete(rte_config.mem_config);
  13.                   rte_eal_config_reattach();
  14.                   break;
  15.          case RTE_PROC_AUTO:
  16.          case RTE_PROC_INVALID:
  17.                   rte_panic("Invalid process type\n");
  18.          }
  19. }


In DPDK multi-process mode a process is either RTE_PROC_PRIMARY or RTE_PROC_SECONDARY. RTE_PROC_PRIMARY initializes the memory; RTE_PROC_SECONDARY obtains the primary's memory-mapping information and creates identical mappings of its own. This is how DPDK processes share memory. We will not expand on it here; it becomes clear as the walkthrough proceeds.

• rte_eal_config_create

Creates the struct rte_mem_config structure and mmaps it to the file /var/run/.rte_config.



  1. static void
  2. rte_eal_config_create(void)
  3. {
  4.          void *rte_mem_cfg_addr;
  5.          int retval;
  6.  
  7.          const char *pathname = eal_runtime_config_path(); /*/var/run*/
  8.  
  9.          if (internal_config.no_shconf)
  10.                   return;
  11.  
  12.          /* map the config before hugepage address so that we don't waste a page */
  13.          if (internal_config.base_virtaddr != 0)
  14.                   rte_mem_cfg_addr = (void *)
  15.                            RTE_ALIGN_FLOOR(internal_config.base_virtaddr -
  16.                            sizeof(struct rte_mem_config), sysconf(_SC_PAGE_SIZE));
  17.          else
  18.                   rte_mem_cfg_addr = NULL;
  19.  
  20.          if (mem_cfg_fd < 0){
  21.                   mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0660);
  22.                   if (mem_cfg_fd < 0)
  23.                            rte_panic("Cannot open '%s' for rte_mem_config\n", pathname);
  24.          }
  25.     /* allocating memory via mmap: the usual pattern is to open a file, size it with ftruncate, then mmap it */
  26.          retval = ftruncate(mem_cfg_fd, sizeof(*rte_config.mem_config));
  27.          if (retval < 0){
  28.                   close(mem_cfg_fd);
  29.                   rte_panic("Cannot resize '%s' for rte_mem_config\n", pathname);
  30.          }
  31.  
  32.          retval = fcntl(mem_cfg_fd, F_SETLK, &wr_lock);
  33.          if (retval < 0){
  34.                   close(mem_cfg_fd);
  35.                   rte_exit(EXIT_FAILURE, "Cannot create lock on '%s'. Is another primary "
  36.                                     "process running?\n", pathname);
  37.          }
  38.  
  39.          rte_mem_cfg_addr = mmap(rte_mem_cfg_addr, sizeof(*rte_config.mem_config),
  40.                                     PROT_READ | PROT_WRITE, MAP_SHARED, mem_cfg_fd, 0);
  41.  
  42.          if (rte_mem_cfg_addr == MAP_FAILED){
  43.                   rte_panic("Cannot mmap memory for rte_config\n");
  44.          }
  45.          memcpy(rte_mem_cfg_addr, &early_mem_config, sizeof(early_mem_config));
  46.          rte_config.mem_config = (struct rte_mem_config *) rte_mem_cfg_addr;
  47.  
  48.          /* store address of the config in the config itself so that secondary
  49.           * processes could later map the config into this exact location */
  50.          rte_config.mem_config->mem_cfg_addr = (uintptr_t) rte_mem_cfg_addr;
  51.  
  52. }



• rte_eal_memory_init

Depending on whether the process is RTE_PROC_PRIMARY, this calls either rte_eal_hugepage_init or rte_eal_hugepage_attach.

• rte_eal_hugepage_init

This function is the heart of memory initialization. Its overall flow is shown below:

We analyze it piece by piece.


  1. int
  2. rte_eal_hugepage_init(void)
  3. {
  4.          struct rte_mem_config *mcfg;
  5.          struct hugepage_file *hugepage = NULL, *tmp_hp = NULL;
  6.          struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
  7.  
  8.          uint64_t memory[RTE_MAX_NUMA_NODES];
  9.  
  10.          unsigned hp_offset;
  11.          int i, j, new_memseg;
  12.          int nr_hugefiles, nr_hugepages = 0;
  13.          void *addr;
  14.  
  15.          test_proc_pagemap_readable();
  16.  
  17.          memset(used_hp, 0, sizeof(used_hp));
  18.  
  19.          /* fetch rte_config->mem_config */
  20.          mcfg = rte_eal_get_configuration()->mem_config;
  21.          /* hugetlbfs can be disabled */
  22.          if (internal_config.no_hugetlbfs) {
  23.                  addr = mmap(NULL, internal_config.memory, PROT_READ | PROT_WRITE,
  24.                            MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
  25.                       if (addr == MAP_FAILED) {
  26.                                RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
  27.                                              strerror(errno));
  28.                                return -1;
  29.                       }
  30.               mcfg->memseg[0].phys_addr = (phys_addr_t)(uintptr_t)addr;
  31.               mcfg->memseg[0].addr = addr;
  32.               mcfg->memseg[0].hugepage_sz = RTE_PGSIZE_4K;
  33.               mcfg->memseg[0].len = internal_config.memory;
  34.               mcfg->memseg[0].socket_id = 0;
  35.               return 0;
  36.          }

At the start, the function stores the configuration obtained by rte_config_init into the local variable mcfg and then checks whether hugetlbfs is enabled. If it is not, the required memory is simply obtained with an anonymous mmap of ordinary pages, a single memseg is filled in to describe it, and the function returns.


  1. /* calculate total number of hugepages available. at this point we haven't
  2.           * yet started sorting them so they all are on socket 0 */
  3.          for (i = 0; i < (int) internal_config.num_hugepage_sizes; i++) {
  4.                   /* meanwhile, also initialize used_hp hugepage sizes in used_hp */
  5.                   used_hp[i].hugepage_sz = internal_config.hugepage_info[i].hugepage_sz;
  6.                   nr_hugepages += internal_config.hugepage_info[i].num_pages[0];
  7.          }
  8.          /*
  9.           * allocate a memory area for hugepage table.
  10.           * this isn't shared memory yet. due to the fact that we need some
  11.           * processing done on these pages, shared memory will be created
  12.           * at a later stage.
  13.           */
  14.     /* note: this array is not allocated from hugepages; it is later copied into hugepage-backed shared memory and this buffer is freed */
  15.          tmp_hp = malloc(nr_hugepages * sizeof(struct hugepage_file));
  16.          if (tmp_hp == NULL)
  17.                   goto fail;
  18.  
  19.          memset(tmp_hp, 0, nr_hugepages * sizeof(struct hugepage_file));
  20.  
  21.          hp_offset = 0; /* where we start the current page size entries */
  22.  
  23.          huge_register_sigbus();

This part computes the total number of hugepages in the system, stored in nr_hugepages, and then allocates an array of struct hugepage_file, one entry per hugepage. Note that this array is not in hugepage-backed shared memory yet; it is ordinary, process-private memory.


  1. /* map all hugepages and sort them */
  2.          /* iterate over each hugepage size, e.g. one pass for 1G and one for 2M */
  3.          for (i = 0; i < (int)internal_config.num_hugepage_sizes; i ++){
  4.                   unsigned pages_old, pages_new;
  5.                   struct hugepage_info *hpi;
  6.  
  7.                   /*
  8.                    * we don't yet mark hugepages as used at this stage, so
  9.                    * we just map all hugepages available to the system
  10.                    * all hugepages are still located on socket 0
  11.                    */
  12.                   hpi = &internal_config.hugepage_info[i];
  13.  
  14.                   if (hpi->num_pages[0] == 0) /* skip this size if there are no hugepages of it */
  15.                            continue;
  16.  
  17.                   /* map all hugepages available */
  18.                   pages_old = hpi->num_pages[0];
  19.                   /* create a backing file for each hugepage and mmap it; page info is stored in the array starting at tmp_hp[hp_offset] */
  20.                   pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, 1);
  21.                   if (pages_new < pages_old) { /* some pages were not mmap'd successfully, possibly taken by another process in the meantime */
  22.                            RTE_LOG(DEBUG, EAL,
  23.                                     "%d not %d hugepages of size %u MB allocated\n",
  24.                                     pages_new, pages_old,
  25.                                     (unsigned)(hpi->hugepage_sz / 0x100000));
  26.  
  27.                            int pages = pages_old - pages_new;
  28.  
  29.                            nr_hugepages -= pages;
  30.                            hpi->num_pages[0] = pages_new; /* update the usable count to the pages that mmap'd successfully */
  31.                            if (pages_new == 0)
  32.                                     continue;
  33.                   }
  34.         /* look up the starting physical address of each hugepage; recorded in hugepage_file.physaddr */
  35.                   /* find physical addresses and sockets for each hugepage */
  36.                   if (find_physaddrs(&tmp_hp[hp_offset], hpi) < 0){
  37.                            RTE_LOG(DEBUG, EAL, "Failed to find phys addr for %u MB pages\n",
  38.                                              (unsigned)(hpi->hugepage_sz / 0x100000));
  39.                            goto fail;
  40.                   }
  41.         /* look up the NUMA socket of each hugepage; recorded in hugepage_file.socket_id */
  42.                   if (find_numasocket(&tmp_hp[hp_offset], hpi) < 0){
  43.                            RTE_LOG(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages\n",
  44.                                              (unsigned)(hpi->hugepage_sz / 0x100000));
  45.                            goto fail;
  46.                   }
  47.         /* sort the hugepages in tmp_hp by physical address */
  48.                   qsort(&tmp_hp[hp_offset], hpi->num_pages[0],
  49.                         sizeof(struct hugepage_file), cmp_physaddr);
  50.  
  51.                   /* remap all hugepages */
  52.                   /* mmap the sorted hugepages a second time */
  53.                   if (map_all_hugepages(&tmp_hp[hp_offset], hpi, 0) !=
  54.                       hpi->num_pages[0]) {
  55.                            RTE_LOG(ERR, EAL, "Failed to remap %u MB pages\n",
  56.                                              (unsigned)(hpi->hugepage_sz / 0x100000));
  57.                            goto fail;
  58.                   }
  59.         /* tear down the first round of mappings */
  60.                   /* unmap original mappings */
  61.                   if (unmap_all_hugepages_orig(&tmp_hp[hp_offset], hpi) < 0)
  62.                            goto fail;
  63.  
  64.                   /* we have processed a num of hugepages of this size, so inc offset */
  65.                   hp_offset += hpi->num_pages[0]; /* advance hp_offset; each iteration handles all hugepages of one size */
  66.          }

    Building the hugepage array takes the following steps:

(1) Loop over all hugetlbfs filesystems in the system. In practice a system usually uses a single hugetlbfs mount, so this outer loop effectively runs once; the basic data for one hugetlbfs filesystem are its page size (say 2M) and its page count (say 2K pages);

(2) Map all pages of that hugetlbfs into the current process and track them in the process's hugepage array. This is done by map_all_hugepages; the virtual addresses of this first mapping are stored in the orig_va field of the hugepage structures;

(3) Walk the hugepage array and find, for every virtual address, the corresponding physical address and the physical CPU it belongs to, recording them in the array as well: the physical address goes into phyaddr and the socket number into socket_id;

(4) Sort the hugepage array by physical address;

(5) Remap the pages according to the sorted order, again using map_all_hugepages; the virtual addresses of this second mapping are stored in final_va;

(6) Tear down the first round of mappings, i.e. return the virtual address space recorded in orig_va to the kernel.

Let us now look at the implementation of map_all_hugepages.

• map_all_hugepages


  1. static unsigned
  2. map_all_hugepages(struct hugepage_file *hugepg_tbl,
  3.                   struct hugepage_info *hpi, int orig)
  4. {
  5.          int fd;
  6.          unsigned i;
  7.          void *virtaddr;
  8.          void *vma_addr = NULL;
  9.          size_t vma_len = 0;
  10.     /* iterate over every hugepage */
  11.          for (i = 0; i < hpi->num_pages[0]; i++) {
  12.                   uint64_t hugepage_sz = hpi->hugepage_sz;
  13.  
  14.                   if (orig) { /* first invocation of this function */
  15.                            hugepg_tbl[i].file_id = i; /* sequence number of this hugepage */
  16.                            hugepg_tbl[i].size = hugepage_sz;
  17.                            eal_get_hugefile_path(hugepg_tbl[i].filepath,
  18.                                              sizeof(hugepg_tbl[i].filepath), hpi->hugedir,
  19.                                              hugepg_tbl[i].file_id); /* build the backing file path for this hugepage, e.g. /mnt/huge/rtemap_0 */
  20.                            hugepg_tbl[i].filepath[sizeof(hugepg_tbl[i].filepath) - 1] = '\0';
  21.                   }
  22.                   else if (vma_len == 0) { /* second-pass call, first time through the loop */
  23.                            unsigned j, num_pages;
  24.  
  25.                            /* reserve a virtual area for next contiguous
  26.                             * physical block: count the number of
  27.                             * contiguous physical pages. */
  28.                             /* walk the following hugepages to measure the contiguous physical range starting here */
  29.                            for (j = i+1; j < hpi->num_pages[0] ; j++) {
  30.                                     if (hugepg_tbl[j].physaddr !=
  31.                                         hugepg_tbl[j-1].physaddr + hugepage_sz)
  32.                                              break;
  33.                            } /* the allocated physical pages are not necessarily all contiguous; this only finds the contiguous physical range starting at i */
  34.                            num_pages = j - i; /* number of contiguous pages */
  35.                            vma_len = num_pages * hugepage_sz; /* total size of the contiguous pages */
  36.  
  37.                            /* get the biggest virtual memory area up to
  38.                             * vma_len. If it fails, vma_addr is NULL, so
  39.                             * let the kernel provide the address. */
  40.                            vma_addr = get_virtual_area(&vma_len, hpi->hugepage_sz); /* reserve a contiguous virtual range as large as the contiguous physical range */
  41.                            if (vma_addr == NULL)
  42.                                     vma_len = hugepage_sz;
  43.                   }
  44.  
  45.                   /* try to create hugepage file */
  46.                   fd = open(hugepg_tbl[i].filepath, O_CREAT | O_RDWR, 0600);
  47.                   if (fd < 0) {
  48.                            RTE_LOG(DEBUG, EAL, "%s(): open failed: %s\n", __func__,
  49.                                              strerror(errno));
  50.                            return i;
  51.                   }
  52.  
  53.                   /* map the segment, and populate page tables,
  54.                    * the kernel fills this segment with zeros */
  55.                    /* on the first mmap pass vma_addr is NULL and the kernel picks the address; on the second pass vma_addr has been computed */
  56.                   virtaddr = mmap(vma_addr, hugepage_sz, PROT_READ | PROT_WRITE,
  57.                                     MAP_SHARED | MAP_POPULATE, fd, 0);
  58.                   if (virtaddr == MAP_FAILED) {
  59.                            RTE_LOG(DEBUG, EAL, "%s(): mmap failed: %s\n", __func__,
  60.                                              strerror(errno));
  61.                            close(fd);
  62.                            return i;
  63.                   }
  64.  
  65.                   if (orig) { /* first pass: store the mapped virtual address in orig_va */
  66.                            hugepg_tbl[i].orig_va = virtaddr;
  67.                   }
  68.                   else { /* second pass: store the mapped virtual address in final_va */
  69.                            hugepg_tbl[i].final_va = virtaddr;
  70.                   }
  71.  
  72.                   if (orig) {
  73.                            /* In linux, hugetlb limitations, like cgroup, are
  74.                             * enforced at fault time instead of mmap(), even
  75.                             * with the option of MAP_POPULATE. Kernel will send
  76.                             * a SIGBUS signal. To avoid to be killed, save stack
  77.                             * environment here, if SIGBUS happens, we can jump
  78.                             * back here.
  79.                             */
  80.                            if (huge_wrap_sigsetjmp()) {
  81.                                     RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more "
  82.                                              "hugepages of size %u MB\n",
  83.                                              (unsigned)(hugepage_sz / 0x100000));
  84.                                     munmap(virtaddr, hugepage_sz);
  85.                                     close(fd);
  86.                                     unlink(hugepg_tbl[i].filepath);
  87.                                     return i;
  88.                            }
  89.                            *(int *)virtaddr = 0;
  90.                   }
  91.  
  92.  
  93.                   /* set shared flock on the file. */
  94.                   if (flock(fd, LOCK_SH | LOCK_NB) == -1) {
  95.                            RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n",
  96.                                     __func__, strerror(errno));
  97.                            close(fd);
  98.                            return i;
  99.                   }
  100.  
  101.                   close(fd);
  102.  
  103.                   vma_addr = (char *)vma_addr + hugepage_sz;
  104.                   vma_len -= hugepage_sz;
  105.          }
  106.  
  107.          return i;
  108. }

This function is reused; it is called twice. On the first call it builds a file name for each of the m pages of the hugetlbfs filesystem, creates the file, maps it into a block of the process's virtual address space with mmap, and stores the returned virtual address in the orig_va field of the hugepage structure. If the hugetlbfs has 1K pages, 1K files end up under the hugetlbfs mount directory, and the virtual addresses they are mmap'd to are tracked by the process's hugepage array. On the second call the hugepage array has already been sorted by physical address; the ordered physical addresses may or may not be contiguous, so the call walks the array and measures the contiguous run of physical memory starting at the current entry. Why? Because the second mapping must ensure that physically contiguous memory is also virtually contiguous. Once the size of the contiguous physical run is known, say 100 pages, get_virtual_area is called to reserve a 100-page range of virtual address space from the kernel. If that succeeds, a large enough virtual range exists, and the loop then maps the 100 pages one by one, passing as mmap's first argument the address returned by get_virtual_area plus i * page size. The result is 100 pages whose virtual and physical addresses are both contiguous, with the virtual addresses stored in final_va.

How exactly is a contiguous range of virtual address space found?


  1. static void *
  2. get_virtual_area(size_t *size, size_t hugepage_sz)
  3. {
  4.          void *addr;
  5.          int fd;
  6.          long aligned_addr;
  7.  
  8.          if (internal_config.base_virtaddr != 0) {
  9.                   addr = (void*) (uintptr_t) (internal_config.base_virtaddr +
  10.                                     baseaddr_offset);
  11.          }
  12.          else addr = NULL;
  13.  
  14.          RTE_LOG(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes\n", *size);
  15.  
  16.          fd = open("/dev/zero", O_RDONLY);
  17.          if (fd < 0){
  18.                   RTE_LOG(ERR, EAL, "Cannot open /dev/zero\n");
  19.                   return NULL;
  20.          }
  21.          do {
  22.                   /* note that a private mapping is used here */
  23.                   addr = mmap(addr,
  24.                                     (*size) + hugepage_sz, PROT_READ, MAP_PRIVATE, fd, 0);
  25.                   if (addr == MAP_FAILED)
  26.                            *size -= hugepage_sz;
  27.          } while (addr == MAP_FAILED && *size > 0); /* a contiguous virtual range this large may not exist, so shrink the request on failure */
  28.  
  29.          if (addr == MAP_FAILED) {
  30.                   close(fd);
  31.                   RTE_LOG(ERR, EAL, "Cannot get a virtual area: %s\n",
  32.                            strerror(errno));
  33.                   return NULL;
  34.          }
  35.  
  36.          munmap(addr, (*size) + hugepage_sz);
  37.          close(fd);
  38.  
  39.          /* align addr to a huge page size boundary */
  40.          aligned_addr = (long)addr;
  41.          aligned_addr += (hugepage_sz - 1);
  42.          aligned_addr &= (~(hugepage_sz - 1));
  43.          addr = (void *)(aligned_addr);
  44.  
  45.          RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n",
  46.                   addr, *size);
  47.  
  48.          /* increment offset */
  49.          baseaddr_offset += *size;
  50.  
  51.          return addr;
  52. }

    Next, the implementation of find_physaddrs.

• find_physaddrs

    This function finds, for every virtual address in the hugepage array, the corresponding physical address and stores it in the physaddr field. The actual lookup is performed by rte_mem_virt2phy(const void *virt), which amounts to a page-table lookup through the Linux pagemap file /proc/self/pagemap. That file exposes the current process's page table, i.e. the mapping from its virtual addresses to physical addresses: the upper bits of a virtual address select the pagemap entry that yields the physical page frame, and the frame number plus the in-page offset gives the physical address. The implementation is as follows:


  1. phys_addr_t
  2. rte_mem_virt2phy(const void *virtaddr)
  3. {
  4.          int fd, retval;
  5.          uint64_t page, physaddr;
  6.          unsigned long virt_pfn;
  7.          int page_size;
  8.          off_t offset;
  9.  
  10.          /* when using dom0, /proc/self/pagemap always returns 0, check in
  11.           * dpdk memory by browsing the memsegs */
  12.          if (rte_xen_dom0_supported()) {
  13.                   struct rte_mem_config *mcfg;
  14.                   struct rte_memseg *memseg;
  15.                   unsigned i;
  16.  
  17.                   mcfg = rte_eal_get_configuration()->mem_config;
  18.                   for (i = 0; i < RTE_MAX_MEMSEG; i++) {
  19.                            memseg = &mcfg->memseg[i];
  20.                            if (memseg->addr == NULL)
  21.                                     break;
  22.                            if (virtaddr > memseg->addr &&
  23.                                              virtaddr < RTE_PTR_ADD(memseg->addr,
  24.                                                       memseg->len)) {
  25.                                     return memseg->phys_addr +
  26.                                              RTE_PTR_DIFF(virtaddr, memseg->addr);
  27.                            }
  28.                   }
  29.  
  30.                   return RTE_BAD_PHYS_ADDR;
  31.          }
  32.  
  33.          /* Cannot parse /proc/self/pagemap, no need to log errors everywhere */
  34.          if (!proc_pagemap_readable)
  35.                   return RTE_BAD_PHYS_ADDR;
  36.  
  37.          /* standard page size */
  38.          page_size = getpagesize();
  39.  
  40.          fd = open("/proc/self/pagemap", O_RDONLY);
  41.          if (fd < 0) {
  42.                   RTE_LOG(ERR, EAL, "%s(): cannot open /proc/self/pagemap: %s\n",
  43.                            __func__, strerror(errno));
  44.                   return RTE_BAD_PHYS_ADDR;
  45.          }
  46.  
  47.          virt_pfn = (unsigned long)virtaddr / page_size;
  48.          offset = sizeof(uint64_t) * virt_pfn;
  49.          if (lseek(fd, offset, SEEK_SET) == (off_t) -1) {
  50.                   RTE_LOG(ERR, EAL, "%s(): seek error in /proc/self/pagemap: %s\n",
  51.                                     __func__, strerror(errno));
  52.                   close(fd);
  53.                   return RTE_BAD_PHYS_ADDR;
  54.          }
  55.  
  56.          retval = read(fd, &page, PFN_MASK_SIZE);
  57.          close(fd);
  58.          if (retval < 0) {
  59.                   RTE_LOG(ERR, EAL, "%s(): cannot read /proc/self/pagemap: %s\n",
  60.                                     __func__, strerror(errno));
  61.                   return RTE_BAD_PHYS_ADDR;
  62.          } else if (retval != PFN_MASK_SIZE) {
  63.                   RTE_LOG(ERR, EAL, "%s(): read %d bytes from /proc/self/pagemap "
  64.                                     "but expected %d:\n",
  65.                                     __func__, retval, PFN_MASK_SIZE);
  66.                   return RTE_BAD_PHYS_ADDR;
  67.          }
  68.  
  69.          /*
  70.           * the pfn (page frame number) are bits 0-54 (see
  71.           * pagemap.txt in linux Documentation)
  72.           */
  73.          physaddr = ((page & 0x7fffffffffffffULL) * page_size)
  74.                   + ((unsigned long)virtaddr % page_size);
  75.  
  76.          return physaddr;
  77. }

• find_numasocket

    Now the implementation of find_numasocket. This function finds, for each virtual address in the hugepage array, the physical CPU (NUMA node) it belongs to. It relies on the /proc/self/numa_maps file provided by Linux, which records the correspondence between the process's virtual address ranges and NUMA nodes on a multi-socket system. While parsing, lines that do not describe hugepages are filtered out, and the remaining virtual addresses are matched against the orig_va fields of the hugepage array. The implementation is as follows:


  1. static int
  2. find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
  3. {
  4.          int socket_id;
  5.          char *end, *nodestr;
  6.          unsigned i, hp_count = 0;
  7.          uint64_t virt_addr;
  8.          char buf[BUFSIZ];
  9.          char hugedir_str[PATH_MAX];
  10.          FILE *f;
  11.  
  12.          f = fopen("/proc/self/numa_maps", "r");
  13.          if (f == NULL) {
  14.                   RTE_LOG(NOTICE, EAL, "cannot open /proc/self/numa_maps,"
  15.                                     " consider that all memory is in socket_id 0\n");
  16.                   return 0;
  17.          }
  18.  
  19.          snprintf(hugedir_str, sizeof(hugedir_str),
  20.                            "%s/%s", hpi->hugedir, internal_config.hugefile_prefix);
  21.  
  22.          /* parse numa map */
  23.          while (fgets(buf, sizeof(buf), f) != NULL) {
  24.  
  25.                   /* ignore non huge page */
  26.                   if (strstr(buf, " huge ") == NULL &&
  27.                                     strstr(buf, hugedir_str) == NULL)
  28.                            continue;
  29.  
  30.                   /* get zone addr */
  31.                   virt_addr = strtoull(buf, &end, 16);
  32.                   if (virt_addr == 0 || end == buf) {
  33.                            RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
  34.                            goto error;
  35.                   }
  36.  
  37.                   /* get node id (socket id) */
  38.                   nodestr = strstr(buf, " N");
  39.                   if (nodestr == NULL) {
  40.                            RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
  41.                            goto error;
  42.                   }
  43.                   nodestr += 2;
  44.                   end = strstr(nodestr, "=");
  45.                   if (end == NULL) {
  46.                            RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
  47.                            goto error;
  48.                   }
  49.                   end[0] = '\0';
  50.                   end = NULL;
  51.  
  52.                   socket_id = strtoul(nodestr, &end, 0);
  53.                   if ((nodestr[0] == '\0') || (end == NULL) || (*end != '\0')) {
  54.                            RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
  55.                            goto error;
  56.                   }
  57.  
  58.                   /* if we find this page in our mappings, set socket_id */
  59.                   for (i = 0; i < hpi->num_pages[0]; i++) {
  60.                            void *va = (void *)(unsigned long)virt_addr;
  61.                            if (hugepg_tbl[i].orig_va == va) {
  62.                                     hugepg_tbl[i].socket_id = socket_id;
  63.                                     hp_count++;
  64.                            }
  65.                   }
  66.          }
  67.  
  68.          if (hp_count < hpi->num_pages[0])
  69.                   goto error;
  70.  
  71.          fclose(f);
  72.          return 0;
  73.  
  74. error:
  75.          fclose(f);
  76.          return -1;
  77. }

    sort_by_physaddr sorts the array by the physaddr field of the hugepage structures and is straightforward; unmap_all_hugepages_orig calls munmap to return the orig_va virtual addresses to the kernel.

The steps above complete the construction of the hugepage array. The array now corresponds to the large pages of one hugetlbfs filesystem; each element is a hugepage structure whose physaddr holds the page's physical address, final_va holds the virtual address at which that physical page is mapped into the process, and socket_id holds the physical CPU number. If several hugepage entries have contiguous final_va virtual addresses, their physaddr physical addresses are contiguous as well.

Below is the remainder of rte_eal_hugepage_init. Recall that so far the process has mmap'd every usable page in the system, far more than it actually needs, so the surplus memory has to be released; that is what the next block of code does.


  1. if (internal_config.memory == 0 && internal_config.force_sockets == 0)
  2.                   internal_config.memory = eal_get_hugepage_mem_size();
  3.  
  4.          nr_hugefiles = nr_hugepages;
  5.  
  6.  
  7.          /* clean out the numbers of pages */
  8.          /* clear the page counts in hugepage_info, since all hugepages were previously counted against socket 0 */
  9.          for (i = 0; i < (int) internal_config.num_hugepage_sizes; i++)
  10.                   for (j = 0; j < RTE_MAX_NUMA_NODES; j++)
  11.                            internal_config.hugepage_info[i].num_pages[j] = 0;
  12.          /* using the per-page socket info found earlier, recount the hugepages on each socket */
  13.          /* get hugepages for each socket */
  14.          for (i = 0; i < nr_hugefiles; i++) {
  15.                   int socket = tmp_hp[i].socket_id;
  16.  
  17.                   /* find a hugepage info with right size and increment num_pages */
  18.                   const int nb_hpsizes = RTE_MIN(MAX_HUGEPAGE_SIZES,
  19.                                     (int)internal_config.num_hugepage_sizes);
  20.                   for (j = 0; j < nb_hpsizes; j++) {
  21.                            if (tmp_hp[i].size ==
  22.                                              internal_config.hugepage_info[j].hugepage_sz) {
  23.                                     internal_config.hugepage_info[j].num_pages[socket]++;
  24.                            }
  25.                   }
  26.          }
  27.          /* memory[i] holds the amount of memory to be requested on each socket, as specified on the command line */
  28.          /* make a copy of socket_mem, needed for number of pages calculation */
  29.          for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
  30.                   memory[i] = internal_config.socket_mem[i];
  31.  
  32.          /* calculate final number of pages */
  33.          /* compute the number of hugepages actually needed for the memory size this process requested; all free pages in the system were mmap'd earlier, but the program does not need that many, so the real requirement is computed here, and used_hp records the hugepage info for the memory actually needed */
  34.          nr_hugepages = calc_num_pages_per_socket(memory,
  35.                            internal_config.hugepage_info, used_hp,
  36.                            internal_config.num_hugepage_sizes);
  37.  
  38.          /* error if not enough memory available */
  39.          if (nr_hugepages < 0)
  40.                   goto fail;
  41.  
  42.          /* reporting */
  43.          for (i = 0; i < (int) internal_config.num_hugepage_sizes; i++) {
  44.                   for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
  45.                            if (used_hp[i].num_pages[j] > 0) {
  46.                                     RTE_LOG(DEBUG, EAL,
  47.                                              "Requesting %u pages of size %uMB"
  48.                                              " from socket %i\n",
  49.                                              used_hp[i].num_pages[j],
  50.                                              (unsigned)
  51.                                              (used_hp[i].hugepage_sz / 0x100000),
  52.                                              j);
  53.                            }
  54.                   }
  55.          }
  56.          /* create the shared-memory file holding the hugepage_file array; note nr_hugefiles is the number of all usable pages in the system, not just the pages this program needs */
  57.          /* create shared memory */
  58.          hugepage = create_shared_memory(eal_hugepage_info_path(),
  59.                            nr_hugefiles * sizeof(struct hugepage_file));
  60.  
  61.          if (hugepage == NULL) {
  62.                   RTE_LOG(ERR, EAL, "Failed to create shared memory!\n");
  63.                   goto fail;
  64.          }
  65.          memset(hugepage, 0, nr_hugefiles * sizeof(struct hugepage_file));
  66.  
  67.          /*
  68.           * unmap pages that we won't need (looks at used_hp).
  69.           * also, sets final_va to NULL on pages that were unmapped.
  70.           */
  71.           /* unmap the surplus pages mapped earlier; we do not need that much memory */
  72.          if (unmap_unneeded_hugepages(tmp_hp, used_hp,
  73.                            internal_config.num_hugepage_sizes) < 0) {
  74.                   RTE_LOG(ERR, EAL, "Unmapping and locking hugepages failed!\n");
  75.                   goto fail;
  76.          }
  77.  
  78.          /*
  79.           * copy stuff from malloc'd hugepage* to the actual shared memory.
  80.           * this procedure only copies those hugepages that have final_va
  81.           * not NULL. has overflow protection.
  82.           */
  83.           /* copy the hugepage_file array into the shared-memory file; only entries whose final_va is not NULL are copied, and the unneeded pages were set to NULL above, so only the pages the program actually needs are copied */
  84.          if (copy_hugepages_to_shared_mem(hugepage, nr_hugefiles,
  85.                            tmp_hp, nr_hugefiles) < 0) {
  86.                   RTE_LOG(ERR, EAL, "Copying tables to shared memory failed!\n");
  87.                   goto fail;
  88.          }
  89.  
  90.     /* if internal_config.hugepage_unlink is set, unlink (delete) the backing files of the pages in use; this does not affect the program's use of the memory */
  91.          /* free the hugepage backing files */
  92.          if (internal_config.hugepage_unlink &&
  93.                   unlink_hugepage_files(tmp_hp, internal_config.num_hugepage_sizes) < 0) {
  94.                   RTE_LOG(ERR, EAL, "Unlinking hugepage files failed!\n");
  95.                   goto fail;
  96.          }
  97.  
  98.          /* free the temporary hugepage table */
  99.          free(tmp_hp);
  100.          tmp_hp = NULL;

    After this step the file .rte_hugepage_info exists on disk, holding the information about the hugepages the current process actually uses, and the temporary hugepage array is freed. Other processes can map the hugepage_info file to obtain the hugepage array and thereby manage the shared hugepage memory.

Below is the final part of rte_eal_hugepage_init. It does two things. First, hugepages that belong to the same physical CPU and are physically contiguous are grouped under a single memseg structure; the memory tracked by one memseg is always on one socket and contiguous in both virtual and physical address space, and the memzone interfaces are ultimately implemented on top of these memsegs. Second, the information in the hugepage and memseg arrays is recorded in the shared files so that secondary processes can retrieve it.


  1. /* first memseg index shall be 0 after incrementing it below */
  2.          j = -1;
  3.          for (i = 0; i < nr_hugefiles; i++) {
  4.                   new_memseg = 0;
  5.  
  6.                   /* if this is a new section, create a new memseg */
  7.                   if (i == 0)
  8.                            new_memseg = 1;
  9.                   else if (hugepage[i].socket_id != hugepage[i-1].socket_id)
  10.                            new_memseg = 1;
  11.                   else if (hugepage[i].size != hugepage[i-1].size)
  12.                            new_memseg = 1;
  13.  
  14.                   else if ((hugepage[i].physaddr - hugepage[i-1].physaddr) !=
  15.                       hugepage[i].size)
  16.                            new_memseg = 1;
  17.                   else if (((unsigned long)hugepage[i].final_va -
  18.                       (unsigned long)hugepage[i-1].final_va) != hugepage[i].size)
  19.                            new_memseg = 1;
  20.  
  21.                   if (new_memseg) { /* a new memseg: initialize it from its first hugepage */
  22.                            j += 1;
  23.                            if (j == RTE_MAX_MEMSEG)
  24.                                     break;
  25.  
  26.                            mcfg->memseg[j].phys_addr = hugepage[i].physaddr;
  27.                            mcfg->memseg[j].addr = hugepage[i].final_va;
  28.                            mcfg->memseg[j].len = hugepage[i].size;
  29.                            mcfg->memseg[j].socket_id = hugepage[i].socket_id;
  30.                            mcfg->memseg[j].hugepage_sz = hugepage[i].size;
  31.                   }
  32.                   /* continuation of previous memseg */
  33.                   else { /* continuation of the current memseg */
  34.                            mcfg->memseg[j].len += mcfg->memseg[j].hugepage_sz;
  35.                   }
  36.                   hugepage[i].memseg_id = j; /* record the memseg id in the hugepage entry */
  37.          }

    After this function, the overall memory layout is as shown below.

• rte_eal_memzone_init

Finally, let us look at rte_eal_memzone_init. Internally it mainly calls rte_eal_malloc_heap_init, so we go straight to that function.


  1. int
  2. rte_eal_malloc_heap_init(void)
  3. {
  4.          struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
  5.          unsigned ms_cnt;
  6.          struct rte_memseg *ms;
  7.  
  8.          if (mcfg == NULL)
  9.                   return -1;
  10.     /* iterate over all memsegs and call malloc_heap_add_memseg on each */
  11.          for (ms = &mcfg->memseg[0], ms_cnt = 0;
  12.                            (ms_cnt < RTE_MAX_MEMSEG) && (ms->len > 0);
  13.                            ms_cnt++, ms++) {
  14.                   malloc_heap_add_memseg(&mcfg->malloc_heaps[ms->socket_id], ms);
  15.          }
  16.  
  17.          return 0;
  18. }

      malloc_heap_add_memseg creates the memory-management structures for each memseg.

• malloc_heap_add_memseg


  1. /*
  2.  * Expand the heap with a memseg.
  3.  * This reserves the zone and sets a dummy malloc_elem header at the end
  4.  * to prevent overflow. The rest of the zone is added to free list as a single
  5.  * large free block
  6.  */
  7. static void
  8. malloc_heap_add_memseg(struct malloc_heap *heap, struct rte_memseg *ms)
  9. {
  10.          /* allocate the memory block headers, one at end, one at start */
  11.          /* start_elem sits at the start of this memseg (a contiguous physical region) */
  12.          struct malloc_elem *start_elem = (struct malloc_elem *)ms->addr;
  13.          /* end_elem sits at the end of the memseg (leaving MALLOC_ELEM_OVERHEAD of space to guard against overflow) */
  14.          struct malloc_elem *end_elem = RTE_PTR_ADD(ms->addr,
  15.                            ms->len - MALLOC_ELEM_OVERHEAD);
  16.          end_elem = RTE_PTR_ALIGN_FLOOR(end_elem, RTE_CACHE_LINE_SIZE);
  17.          /* elem_size is the size of this contiguous region */
  18.          const size_t elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem;
  19.     /* initialize start_elem; its state is set to ELEM_FREE */
  20.          malloc_elem_init(start_elem, heap, ms, elem_size);
  21.          /* initialize end_elem; its state is ELEM_BUSY and its prev points to start_elem */
  22.          malloc_elem_mkend(end_elem, start_elem);
  23.          /* based on elem_size, find the right index in heap->free_head and insert start_elem into that free list */
  24.          malloc_elem_free_list_insert(start_elem);
  25.  
  26.          heap->total_size += elem_size;
  27. }

After this code runs, the data structures and their relation to memory are as shown in the figure below.

   Finally, how is the right index into heap->free_head chosen from elem_size (the size of the memseg's contiguous memory)? heap->free_head is split into several free lists according to block size, for example:

 * Example element size ranges for a heap with five free lists:

 *   heap->free_head[0] - (0   , 2^8]

 *   heap->free_head[1] - (2^8 , 2^10]

 *   heap->free_head[2] - (2^10 ,2^12]

 *   heap->free_head[3] - (2^12, 2^14]

 *   heap->free_head[4] - (2^14, MAX_SIZE]
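The bucket index can be computed from elem_size as in the following standalone sketch (modelled on DPDK's malloc_elem_free_list_index; the constants below — smallest bucket 2^8, one list every two powers of two, five lists — match the example above but are assumptions of this sketch):

    #include <stddef.h>
    #include <stdio.h>

    #define MINSIZE_LOG2   8
    #define LOG2_INCREMENT 2
    #define NUM_FREELISTS  5

    static size_t free_list_index(size_t size)
    {
        size_t log2, index;

        if (size <= (1UL << MINSIZE_LOG2))
            return 0;

        /* position of the smallest power of two >= size (GCC builtin) */
        log2 = sizeof(size) * 8 - __builtin_clzl(size - 1);

        index = (log2 - MINSIZE_LOG2 + LOG2_INCREMENT - 1) / LOG2_INCREMENT;
        return index <= NUM_FREELISTS - 1 ? index : NUM_FREELISTS - 1;
    }

    int main(void)
    {
        /* 200B -> list 0, 3KB -> list 2, a 1GB memseg -> the last list */
        printf("%zu %zu %zu\n",
               free_list_index(200), free_list_index(3 * 1024),
               free_list_index(1UL << 30));
        return 0;
    }

For a typical memseg of hundreds of megabytes or more, the computed index therefore always lands in the last free list.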
