
Category: Virtualization

2014-12-05 18:09:40

The difference between xenheap and domheap

Ian Campbell's answer on xen-devel:

There are two models for xenheap vs. domheap, and therefore two versions of init_*heap_pages.

The original model is the split heap model, which is used on platforms that have smaller virtual address spaces, e.g. arm32, for the moment arm64 (but I am about to switch to the second model) and historically the x86_32 platform. This is because, as Andy notes, xenheap must always be mapped while domheap is not (and cannot be on these platforms); domheap is mapped only on demand (map_domain_page).

In this case init_xenheap_pages contains:

    /*
     * Yuk! Ensure there is a one-page buffer between Xen and Dom zones, to
     * prevent merging of power-of-two blocks across the zone boundary.
     */
    if ( ps && !is_xen_heap_mfn(paddr_to_pfn(ps)-1) )
        ps += PAGE_SIZE;
    if ( !is_xen_heap_mfn(paddr_to_pfn(pe)) )
        pe -= PAGE_SIZE;

 

The second model is used on systems which have a large enough virtual address space to map all of RAM, currently x86_64 and soon arm64. In this case there is only one underlying pool of memory and the split is more logical than real, although it is tracked by setting PGC_xen_heap when allocating xenheap pages. In this case domheap is actually always mapped, but you still must use map_domain_page to access it (so that common code works on both models).

 

There is actually an extension to the second model for systems which have enormous amounts of physical memory (e.g. >5TB on x86_64), which brings back the xen/domheap split, but in a different way to the first model. In this case the split is implemented in alloc_xenheap_pages by consulting xenheap_bits to restrict the allocations to only the direct mapped region.
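
To make the "logical split" concrete, here is a simplified, paraphrased sketch of what the shared-pool variant of alloc_xenheap_pages roughly does (the real code lives in xen/common/page_alloc.c and differs in detail; treat this as an illustration, not the authoritative implementation):

    /* Simplified sketch -- not the verbatim Xen source. */
    void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
    {
        struct page_info *pg;
        unsigned int i;

        /* Optionally restrict the allocation to the direct-mapped region
         * (the >5TB case described above). */
        if ( xenheap_bits && !(memflags >> _MEMF_bits) )
            memflags |= MEMF_bits(xenheap_bits);

        /* Same underlying pool as domheap allocations. */
        pg = alloc_domheap_pages(NULL, order, memflags);
        if ( pg == NULL )
            return NULL;

        /* The "split" is only a per-page flag. */
        for ( i = 0; i < (1u << order); i++ )
            pg[i].count_info |= PGC_xen_heap;

        /* All of RAM is direct-mapped, so a virtual address already exists. */
        return page_to_virt(pg);
    }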


Andrew Cooper's answer:

xenheap pages have permanent mappings in the Xen virtual address space, so they can be accessed from anywhere in the code.  domheap pages by default do not have mappings, and must be explicitly mapped to be used.

For PV guests, Xen hands most of the virtual address space to the guest, so only the really critical memory can have permanent mappings.


Let's just talk about ARM32. On ARM32 xenheap and domheap really are split, because a 32-bit CPU has only a 4 GB address space while physical memory can be larger than 4 GB, so not all physical memory can have a permanent mapping.

Pages returned by alloc_xenheap_pages() are already mapped in the xenheap address space, which you can see from the function prototype:

    void *alloc_xenheap_pages(unsigned int order, unsigned int memflags);


It returns a pointer directly into Xen's address space, so there is no need to call map_pages_to_xen() afterwards. Also note that the __pa() and __va() macros can only be used on xenheap pages.

xen\xen\include\asm-arm\mm.h

    /* Convert between Xen-heap virtual addresses and machine addresses. */
    #define __pa(x) (virt_to_maddr(x))
    #define __va(x) (maddr_to_virt(x))

    /* Convert between Xen-heap virtual addresses and machine frame numbers. */
    #define virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT)
    #define mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
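
For instance, here is a small hedged usage sketch (the variable names are mine, not from any Xen source): allocate one xenheap page, use the returned virtual address directly, and convert it to an mfn with virt_to_mfn() if a frame number is needed:

    /* Illustrative sketch only: one order-0 xenheap allocation. */
    void *va = alloc_xenheap_pages(0, 0);        /* already mapped, usable at once */

    if ( va != NULL )
    {
        unsigned long mfn = virt_to_mfn(va);     /* valid because va is a xenheap VA */

        memset(va, 0, PAGE_SIZE);                /* no map_domain_page() needed */
        printk("xenheap page at %p, mfn 0x%lx\n", va, mfn);

        free_xenheap_pages(va, 0);
    }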

Pages allocated by alloc_domheap_pages(), on the other hand, are not mapped, so such a page can only be referred to by its mfn.


    struct page_info *alloc_domheap_pages(
        struct domain *d, unsigned int order, unsigned int memflags);


As you can see, it returns a struct page_info * pointer; to read or write the page you have to map it first. There are three ways to map it:

1

xen\xen\arch\arm\mm.c

    void *map_domain_page_global(unsigned long mfn)
    {
        return vmap(&mfn, 1);
    }


This one is the easiest to understand: the page gets mapped into the 256MB-1GB ioremap region. As for why it is called "global", it seems to be related to the per-PCPU/per-VCPU page tables, which I have not figured out yet. Apparently devices are not the only users of that address range.

xen\xen\include\xen\domain_page.h


    /*
     * Similar to the above calls, except the mapping is accessible in all
     * address spaces (not just within the VCPU that created the mapping). Global
     * mappings can also be unmapped from any context.
     */
    void *map_domain_page_global(unsigned long mfn);
    void unmap_domain_page_global(const void *va);
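
A hedged sketch of the typical pattern (the function name here is made up for illustration): allocate a domheap page, map it globally once, keep the pointer around, and only unmap it at teardown with unmap_domain_page_global():

    /* Illustrative sketch: set up a long-lived mapping of one domheap page. */
    static void *setup_global_scratch_page(struct page_info **pg_out)
    {
        struct page_info *pg = alloc_domheap_pages(NULL, 0, 0);
        void *va;

        if ( pg == NULL )
            return NULL;

        /* Accessible from any context until unmap_domain_page_global(). */
        va = map_domain_page_global(page_to_mfn(pg));
        if ( va == NULL )
        {
            free_domheap_pages(pg, 0);
            return NULL;
        }

        *pg_out = pg;
        return va;
    }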



2

    /*
     * Map a given page frame, returning the mapped virtual address. The page is
     * then accessible within the current VCPU until a corresponding unmap call.
     */
    void *map_domain_page(unsigned long mfn);

    /*
     * Pass a VA within a page previously mapped in the context of the
     * currently-executing VCPU via a call to map_domain_page().
     */
    void unmap_domain_page(const void *va);


After calling map_domain_page(), the returned pointer lets Xen code read and write the page directly, but I don't understand why the comment says the mapping belongs to the current VCPU. Also, the first parameter of alloc_domheap_pages() is a domain, which can be NULL or an actual domain. For pages allocated to different domains, can one domain's dom pages be accessed by another domain?

Here is an example that uses this function:

xen\xen\arch\arm\domain_build.c



    static void initrd_load(struct kernel_info *kinfo)
    {
        paddr_t load_addr = kinfo->initrd_paddr;
        paddr_t paddr = early_info.modules.module[MOD_INITRD].start;
        paddr_t len = early_info.modules.module[MOD_INITRD].size;
        unsigned long offs;
        int node;
        int res;
        __be32 val[2];
        __be32 *cellp;

        if ( !len )
            return;

        printk("Loading dom0 initrd from %"PRIpaddr" to 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
               paddr, load_addr, load_addr + len);

        /* Fix up linux,initrd-start and linux,initrd-end in /chosen */
        node = fdt_path_offset(kinfo->fdt, "/chosen");
        if ( node < 0 )
            panic("Cannot find the /chosen node");

        cellp = (__be32 *)val;
        dt_set_cell(&cellp, ARRAY_SIZE(val), load_addr);
        res = fdt_setprop_inplace(kinfo->fdt, node, "linux,initrd-start",
                                  val, sizeof(val));
        if ( res )
            panic("Cannot fix up \"linux,initrd-start\" property");

        cellp = (__be32 *)val;
        dt_set_cell(&cellp, ARRAY_SIZE(val), load_addr + len);
        res = fdt_setprop_inplace(kinfo->fdt, node, "linux,initrd-end",
                                  val, sizeof(val));
        if ( res )
            panic("Cannot fix up \"linux,initrd-end\" property");

        for ( offs = 0; offs < len; )
        {
            int rc;
            paddr_t s, l, ma;
            void *dst;

            s = offs & ~PAGE_MASK;
            l = min(PAGE_SIZE - s, len);

            rc = gvirt_to_maddr(load_addr + offs, &ma);
            if ( rc )
            {
                panic("Unable to translate guest address");
                return;
            }

            dst = map_domain_page(ma>>PAGE_SHIFT);

            copy_from_paddr(dst + s, paddr + offs, l, BUFFERABLE);

            unmap_domain_page(dst);
            offs += l;
        }
    }


This is initrd_load(), which is called while dom0 is being built. Is it really tied to a VCPU at that point?
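
Leaving initrd_load() aside, the basic pattern is simple. Here is a minimal, hedged sketch (my own example, not taken from the Xen tree) of allocating an anonymous domheap page, mapping it only long enough to initialise it, and freeing it again:

    /* Illustrative sketch: allocate, transiently map, use, unmap, free. */
    struct page_info *pg = alloc_domheap_pages(NULL, 0, 0);  /* order 0, no owner */

    if ( pg != NULL )
    {
        void *va = map_domain_page(page_to_mfn(pg));  /* transient mapping */

        memset(va, 0, PAGE_SIZE);                     /* touch the page via va */
        unmap_domain_page(va);                        /* drop the mapping ASAP */

        free_domheap_pages(pg, 0);
    }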


3


    /* Map machine page range in Xen virtual address space. */
    int map_pages_to_xen(
        unsigned long virt,
        unsigned long mfn,
        unsigned long nr_mfns,
        unsigned int flags);


This one lets you map pages at a virtual address of your choosing; a small usage sketch follows the snippet below.

xen\xen\common\vmap.c


    int map_pages_to_xen(unsigned long virt,
                         unsigned long mfn,
                         unsigned long nr_mfns,
                         unsigned int flags)
    {
        return create_xen_entries(INSERT, virt, mfn, nr_mfns, flags);
    }
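
As promised, here is a hedged usage sketch. MY_SCRATCH_VIRT is a hypothetical virtual address made up for illustration (real code would use an address reserved in the Xen virtual memory layout), and PAGE_HYPERVISOR is the usual flags value for ordinary hypervisor mappings:

    /* Illustrative sketch: map nr_mfns frames starting at mfn at a fixed VA. */
    #define MY_SCRATCH_VIRT  0x30000000UL   /* hypothetical reserved VA */

    static int map_scratch(unsigned long mfn, unsigned long nr_mfns)
    {
        int rc = map_pages_to_xen(MY_SCRATCH_VIRT, mfn, nr_mfns, PAGE_HYPERVISOR);

        if ( rc )
            return rc;

        /* ... access the frames through MY_SCRATCH_VIRT ... */

        /* Tear the mapping down again when done. */
        return destroy_xen_mappings(MY_SCRATCH_VIRT,
                                    MY_SCRATCH_VIRT + nr_mfns * PAGE_SIZE);
    }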


vmap() also ends up calling create_xen_entries(), which suggests this function can map into any part of Xen's address space. I'm not sure whether that is the right way to put it. Can it map into the domheap address space?

Sure, you could take a xenheap page, extract the mfn and use
map_domain_page, but why?

Judging from Ian's answer, it should be possible, at least via map_domain_page().

The main user of map_pages_to_xen() is __vmap().

xen\xen\common\vmap.c



    void *__vmap(const unsigned long *mfn, unsigned int granularity,
                 unsigned int nr, unsigned int align, unsigned int flags)
    {
        void *va = vm_alloc(nr * granularity, align);
        unsigned long cur = (unsigned long)va;

        for ( ; va && nr--; ++mfn, cur += PAGE_SIZE * granularity )
        {
            if ( map_pages_to_xen(cur, *mfn, granularity, flags) )
            {
                vunmap(va);
                va = NULL;
            }
        }

        return va;
    }


This is because, within the 256MB-1GB VMAP address space, the first few pages are used by the vmap bitmap mechanism itself.


If the VCPU aspect is left out, the way domheap is used is easy to understand, and it is quite similar to what Windows does; but domheap allocation is tied to a domain, and that part still needs more study.


It also becomes clear why allocate_memory_11 uses alloc_domheap_pages() rather than xenheap pages: this memory is allocated for dom0, and Xen itself never needs to touch these pages; it only needs to know that the physical frames have been handed out, so there is no reason to map them into Xen's own address space. The pages are mapped into dom0 through the p2m mechanism, i.e. create_p2m_entries(), so the OS running in dom0 can use these physical pages.
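
Roughly, that flow looks like the sketch below. This is a simplified illustration written from the description above, not the actual allocate_memory_11 code; guest_physmap_add_page() is the generic helper that installs p2m entries (create_p2m_entries() sits underneath it on ARM), and the gfn passed in is whatever guest frame number the caller wants the page to appear at:

    /* Illustrative sketch: give a domain one page without Xen ever mapping it. */
    static int give_domain_one_page(struct domain *d, unsigned long gfn)
    {
        /* Account the page to d; Xen needs no virtual mapping of it. */
        struct page_info *pg = alloc_domheap_pages(d, 0, 0);

        if ( pg == NULL )
            return -ENOMEM;

        /* Install the gfn -> mfn translation in the domain's p2m. */
        return guest_physmap_add_page(d, gfn, page_to_mfn(pg), 0);
    }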



http://lists.xen.org/archives/html/xen-devel/2013-08/msg00702.html

http://lists.xen.org/archives/html/xen-devel/2013-08/msg00695.html

http://lists.xen.org/archives/html/xen-devel/2013-08/msg00871.html

http://lists.xen.org/archives/html/xen-devel/2013-08/msg01361.html










