
Category: Virtualization

2012-06-23 16:36:41

Paper Notes

The algorithm proposed in this paper requires a paravirtualized guest DMA mapping interface, so it cannot be used with full virtualization, which is a significant limitation. In addition, the paper's formalization of some of its concepts feels somewhat forced, and the implementation of the quota is not described very clearly.

The essence of the DMA Mapping Problem: when should a page of memory be mapped or unmapped in the IOMMU?

First, a summary of the basic concepts from the paper:
The early direct access implementations used one of two extreme approaches for DMA mapping: they either mapped all of a guest operating system's memory up-front (thus incurring minimal run-time overhead), or they only mapped memory once immediately before it was DMA'd to or from, and unmapped it immediately when the DMA operation was done [7]. Willman, Rixner, and Cox named these strategies direct mapping and single-use mapping, respectively. In addition, they presented two other strategies: shared mapping and persistent mapping [31].

 Single-use mapping has a non-negligible performance overhead [8] but protects the guest’s memory from malicious devices and buggy drivers. Thus it sacrifices performance for reduced memory consumption and increased protection. Direct mapping, on the other hand, is transparent to the guest and requires minimal CPU overhead—but requires pinning all of the guest’s memory, and provides no protection inside a guest (intra-guest protection), only between different guests (inter-guest protection). Thus it sacrifices memory and protection for increased performance.

 Shared mapping and persistent mapping provide different tradeoffs between performance, memory consumption, and protection. Shared mapping reuses a single mapping if more than one device is trying to DMA to the same memory location at the same time, and persistent mapping keeps mappings around once they have been created in case they will be reused in the future.
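To make the tradeoff concrete, here is a minimal sketch of the single-use strategy from a driver's point of view, written against the Linux DMA API. dma_map_single, dma_mapping_error, and dma_unmap_single are real kernel calls; start_device_dma and wait_for_dma_completion are hypothetical device-specific helpers used only for illustration.

```c
/*
 * Single-use mapping, sketched with the Linux DMA API: the buffer is
 * mapped immediately before the DMA and unmapped as soon as the device
 * is done, so the IOMMU window is open only for one transfer.
 */
#include <linux/dma-mapping.h>

/* Hypothetical device-specific helpers, not part of the DMA API. */
extern void start_device_dma(dma_addr_t dma, size_t len);
extern void wait_for_dma_completion(void);

static int send_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma;

	/* Map right before the DMA (this is the per-transfer cost). */
	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	start_device_dma(dma, len);
	wait_for_dma_completion();

	/* Unmap as soon as the device is finished with the buffer. */
	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	return 0;
}
```

Direct mapping avoids this per-transfer map/unmap entirely by mapping all guest memory up-front, which is why it wins on CPU overhead but gives up intra-guest protection.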

The on-demand mapping proposed in this paper is derived from persistent mapping, but it introduces a quota-based model: when the quota is exceeded, some mappings are evicted. The guest implements a Map Cache to decide whether a mapping is already present in the IOMMU; if it is not, the guest issues a hypercall, and a reference count on each mapping decides whether it may be evicted (see the sketch below). Two possible optimizations are Batching Driver Mapping Requests (requires driver changes) and Prefetching Mappings (no driver changes needed).
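Below is a minimal sketch of how such a quota-based Map Cache could look inside the guest. All names here (map_for_dma, hypercall_iommu_map, MAP_QUOTA, and so on) are illustrative assumptions, not the paper's actual interface; the point is only the mechanism: a lookup miss costs a map hypercall, each in-flight DMA holds a reference, and only entries with a zero reference count may be evicted once the quota is exceeded.

```c
/*
 * Illustrative quota-based map cache in the guest. A miss triggers a
 * map hypercall, exceeding the quota evicts an unreferenced entry
 * (plus an unmap hypercall), and entries with DMAs in flight are
 * pinned by their reference count.
 */
#include <stdlib.h>

#define MAP_QUOTA 1024   /* illustrative quota on live IOMMU mappings */

struct map_entry {
	unsigned long gfn;       /* guest frame number being mapped      */
	unsigned long iova;      /* I/O virtual address in the IOMMU     */
	int refcount;            /* DMAs currently using this mapping    */
	struct map_entry *next;  /* simple list; a real cache would hash */
};

static struct map_entry *cache_head;
static int cache_size;

/* Hypothetical hypercalls into the hypervisor's IOMMU backend. */
extern unsigned long hypercall_iommu_map(unsigned long gfn);
extern void hypercall_iommu_unmap(unsigned long iova);

static struct map_entry *lookup(unsigned long gfn)
{
	struct map_entry *e;

	for (e = cache_head; e; e = e->next)
		if (e->gfn == gfn)
			return e;
	return NULL;
}

/* Evict one entry with no DMA in flight (refcount == 0). */
static void evict_one(void)
{
	struct map_entry **pp;

	for (pp = &cache_head; *pp; pp = &(*pp)->next) {
		if ((*pp)->refcount == 0) {
			struct map_entry *victim = *pp;

			*pp = victim->next;
			hypercall_iommu_unmap(victim->iova);
			free(victim);
			cache_size--;
			return;
		}
	}
}

/* Called by a driver before a DMA that touches gfn. */
unsigned long map_for_dma(unsigned long gfn)
{
	struct map_entry *e = lookup(gfn);

	if (!e) {
		if (cache_size >= MAP_QUOTA)
			evict_one();                    /* over quota: evict first */
		e = malloc(sizeof(*e));
		e->gfn = gfn;
		e->iova = hypercall_iommu_map(gfn);     /* miss: one hypercall */
		e->refcount = 0;
		e->next = cache_head;
		cache_head = e;
		cache_size++;
	}
	e->refcount++;                                  /* pin while in flight */
	return e->iova;
}

/* Called when the DMA touching gfn completes; the entry stays cached. */
void dma_done(unsigned long gfn)
{
	struct map_entry *e = lookup(gfn);

	if (e && e->refcount > 0)
		e->refcount--;
}
```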


A shortcoming of current hardware: no current hardware supports I/O page faults; there is no mechanism for I/O page faults in current IOMMUs, I/O devices, and protocols [27].

Future trend: IOMMUs will end up resembling MMUs even more than they do today, and DMA memory management algorithms will keep inching closer to CPU memory management algorithms.


