2013-06-18 14:07:58

CPU vendors began adding hardware memory-management-unit virtualization (vMMU) support around 2008, when Intel extended VT-x (the vmx CPUID flag) with nested paging. Historically, guest-physical (gpa) to host-physical (hpa) addresses were translated in software using shadow page tables. These tables must be kept synchronized with the guest's page tables, and they are one of the main sources of overhead in virtual machines, as keeping them current incurs expensive VM exits. A common way to keep the shadow pages up to date is to write-protect the guest's pages, so that when the guest modifies them, page faults are triggered and intercepted by the VMM, which emulates the write (injecting the page) and updates the shadow tables accordingly. This, of course, is transparent to the guest. Another major problem is that classic TLB semantics require a flush upon every context switch, as a newly scheduled process needs an empty TLB so that it caches only entries belonging to its own address space. To overcome this, CPUs now incorporate tags into TLB entries - known as VPIDs in Intel's terminology - which associate cached translations with a particular process or VM and thus reduce the number of flushes.
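To make the tagged-TLB idea concrete, here is a toy sketch (not KVM or hardware code; all names and sizes are made up): an untagged TLB must be flushed on every switch, while a tagged one simply matches lookups against the current tag.

```c
#include <stdint.h>
#include <string.h>

/* Toy tagged TLB: each entry carries a VPID-style tag, so a context
 * switch needs no flush - lookups just match on the tag as well. */
#define TLB_SIZE 4

struct tlb_entry {
    uint64_t vpn;   /* virtual page number */
    uint64_t pfn;   /* physical frame number */
    uint16_t tag;   /* process/VM identifier (VPID analogue) */
    int valid;
};

static struct tlb_entry tlb[TLB_SIZE];
static int flushes;

/* Untagged behaviour: a switch must invalidate every entry. */
static void switch_untagged(void) {
    memset(tlb, 0, sizeof(tlb));
    flushes++;
}

/* Tagged behaviour: nothing to invalidate. */
static void switch_tagged(void) { /* no flush needed */ }

static void insert(int slot, uint64_t vpn, uint64_t pfn, uint16_t tag) {
    tlb[slot] = (struct tlb_entry){ vpn, pfn, tag, 1 };
}

static int lookup(uint64_t vpn, uint16_t tag, uint64_t *pfn) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn && tlb[i].tag == tag) {
            *pfn = tlb[i].pfn;
            return 1;
        }
    return 0;
}
```

Note that two entries with the same vpn but different tags can coexist, which is exactly what lets the hardware keep guest and host (or two processes') translations cached side by side.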

     With hardware vMMUs, in order to avoid the VMM overhead of shadow paging, the guest is left alone to update its own page tables, while the hardware maintains a second set of page tables that map gpa to hpa. Intel calls these Extended Page Tables (EPT). Having two sets of page tables means that when a guest translates an address, two levels of tables must be walked (sometimes referred to as a 2D page walk).
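The composition of the two mappings can be sketched in a few lines (purely illustrative - real guest and EPT tables are multi-level radix trees, not the flat arrays used here):

```c
#include <stdint.h>

/* Toy 2D translation: the guest's table maps gva->gpa and the
 * EPT-like host table maps gpa->hpa. Single-level arrays stand in
 * for the real multi-level structures. */
#define PAGES 16
#define PAGE_SHIFT 12
#define PAGE_MASK (~((uint64_t)(1 << PAGE_SHIFT) - 1))

static uint64_t guest_table[PAGES]; /* gfn, indexed by guest vpn */
static uint64_t ept_table[PAGES];   /* hfn, indexed by gfn */

static uint64_t gva_to_hpa(uint64_t gva) {
    uint64_t gfn = guest_table[(gva >> PAGE_SHIFT) % PAGES]; /* guest walk */
    uint64_t hfn = ept_table[gfn % PAGES];                   /* EPT walk   */
    return (hfn << PAGE_SHIFT) | (gva & ~PAGE_MASK);         /* keep offset */
}
```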

So hardware support can come at a greater cost than its software equivalent for programs with poor locality and cache-unfriendly access patterns. When a TLB miss occurs and the guest does a page walk, for each hierarchical level of the guest's tables the EPT must be walked as well to obtain the hpa. This is worse for 64-bit guests than for 32-bit ones, as the 64-bit address space requires more levels of translation (PML4, PDP, PD, PT).
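The cost multiplication is easy to quantify. In a nested walk, each of the guest's page-table levels is addressed by a gpa, so reading it costs a full EPT walk plus one read of the guest entry, and the final gpa needs one more EPT walk. A back-of-the-envelope calculation (a sketch of the standard counting argument, not measured data):

```c
/* Memory references for one nested (2D) page walk: g guest levels,
 * each costing an e-level EPT walk plus the guest-entry read itself,
 * plus a final EPT walk for the resulting gpa. */
static int nested_walk_refs(int g, int e) {
    return g * (e + 1) + e;
}
```

With 4 guest levels under a 4-level EPT this gives 24 memory references per walk, versus 4 for a native 4-level walk (`nested_walk_refs(4, 0)`), which is why TLB misses hurt so much more under nested paging.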

    KVM's implementation of EPT is quite unique in that it uses both the guest's tables and the hardware's to translate addresses. When a guest needs to translate a virtual address to a physical one, the gva_to_gpa() function is called:
    static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr)
    {
            struct guest_walker walker;
            gpa_t gpa = UNMAPPED_GVA;
            int r;

            r = FNAME(walk_addr)(&walker, vcpu, vaddr, 0, 0, 0);

            if (r) {
                    gpa = gfn_to_gpa(walker.gfn);
                    gpa |= vaddr & ~PAGE_MASK;
            }

            return gpa;
    }
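The way the result is composed is worth spelling out: gfn_to_gpa() shifts the guest frame number up by the page shift, and the low page-offset bits of the original vaddr are then OR'd back in. A standalone sketch with x86's 4 KiB page constants (not the kernel's actual macro definitions):

```c
#include <stdint.h>

/* 4 KiB pages: the low 12 bits of an address are the page offset. */
#define PAGE_SHIFT 12
#define PAGE_MASK (~((uint64_t)0xFFF))

/* Frame number -> base address of that page. */
static uint64_t gfn_to_gpa(uint64_t gfn) {
    return gfn << PAGE_SHIFT;
}

/* Mirror of the two lines in the if-block above: page base | offset. */
static uint64_t compose_gpa(uint64_t gfn, uint64_t vaddr) {
    return gfn_to_gpa(gfn) | (vaddr & ~PAGE_MASK);
}
```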

If the guest's walk fails because the gva-to-gpa mapping is not present, a page fault is raised and tdp_page_fault() - tdp standing for two-dimensional paging - is invoked through the EPT violation handler, handle_ept_violation(), to translate the gpa to an hpa. A new page table entry is created, reusing the shadow page code through mmu_set_spte(), and it is added to the head of the page list through pte_list_add(). This way, the next time that guest virtual address is accessed it will already be mapped, walk_addr() will succeed, and the gpa can be returned without further ado.
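The fault-and-fill flow described above can be reduced to a small sketch (the table, handler, and frame numbers here are all hypothetical stand-ins for tdp_page_fault()/mmu_set_spte(), not the real code paths):

```c
#include <stdint.h>

#define NPAGES 8
#define NO_MAPPING ((uint64_t)-1)

static uint64_t spt[NPAGES]; /* toy shadow/EPT entries: gfn -> hfn */
static int faults;

static void init_spt(void) {
    for (int i = 0; i < NPAGES; i++)
        spt[i] = NO_MAPPING;
}

/* Stand-in for the fault handler: install the missing mapping. */
static void handle_fault(uint64_t gfn) {
    faults++;
    spt[gfn] = gfn + 100;    /* pretend host frame number */
}

static uint64_t translate(uint64_t gfn) {
    if (spt[gfn] == NO_MAPPING) /* miss -> "EPT violation" */
        handle_fault(gfn);
    return spt[gfn];            /* the retry now succeeds */
}
```

The second access to the same gfn finds the entry already installed and takes no fault, which is the whole point of filling the tables lazily on violation.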
