
Category: LINUX

2011-10-07 21:37:54

A fairly detailed comparison is given in the speaker notes at http://oss.oracle.com/projects/tmem/dist/documentation/presentations/MemMgmtVirtEnv-LPC2010-SpkNotes.pdf

Solution Set A: Each guest hogs all memory given to it
• Partitioning
  • NO overcommitment; the sum of guest memory never exceeds the machine's total memory, so there is always some slack
• Host swapping (most VMM solutions have a way to secretly move some guest
pages out of physical memory and onto a disk, so total guest memory CAN exceed
the machine's physical RAM)
  • SLOW overcommitment
    • like living in a swapstorm
• Transparent page sharing (KSM in Linux; sketched below)
  • “FAUX” (fake) overcommitment, but
    • advantage is very workload dependent
    • inconsistent, variable performance, “cliffs”
    • “semantic gap” between host and guest
“I personally have talked to some real customers of a certain proprietary virtualization company and those customers say they just turn this off because it only works on certain (air quotes) cloned workloads such as you might see in a classroom setting, and on many other workloads performance can get suddenly and unpredictably very very bad.”
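
An aside on the mechanism: in Linux, transparent page sharing is KSM, and it only scans memory an application has explicitly opted in with madvise(MADV_MERGEABLE); QEMU/KVM marks guest RAM this way. A minimal user-space sketch, assuming a kernel built with CONFIG_KSM and KSM started with "echo 1 > /sys/kernel/mm/ksm/run":

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create two byte-identical anonymous regions and opt them into KSM,
 * the Linux transparent-page-sharing mechanism discussed above. */
int main(void)
{
    size_t len = 64UL << 20;                 /* 64 MiB each */
    char *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    char *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (a == MAP_FAILED || b == MAP_FAILED)
        return 1;

    memset(a, 0x5a, len);                    /* identical contents, */
    memset(b, 0x5a, len);                    /* so the pages CAN merge */

    /* KSM only scans regions marked MADV_MERGEABLE. */
    if (madvise(a, len, MADV_MERGEABLE) || madvise(b, len, MADV_MERGEABLE))
        perror("madvise");                   /* e.g. kernel lacks CONFIG_KSM */

    puts("watch /sys/kernel/mm/ksm/pages_sharing while this sleeps");
    sleep(60);
    return 0;
}

Whether pages_sharing actually grows depends entirely on how many pages happen to be byte-identical across guests, which is exactly why the advantage above is "very workload dependent".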
Solution Set A Summary: So to summarize this first set, you have NO
overcommitment, SLOW overcommitment, and "FAUX" overcommitment, none of which
meet our objectives of maximizing RAM utilization without a performance hit. So let's
move on to the next set.


Solution Set B: Guest memory is dynamically adjustable
• Ballooning
    • unpredictable side effects: the balloon driver just does what it's told, and if it sucks up too much memory, bad things can happen; it sometimes causes host swapping (resulting in unpredictable performance). See the user-space sketch after this set's summary.
    • very workload dependent
    • poor match for 2MB pages
• Hot plug memory
    • only useful for higher granularity
    • hot-plug interface not designed for high frequency changes or mid-size granularity
    • hot plug delete is problematic
Solution Set B Summary: So there IS a way to dynamically adjust memory size in a
running guest OS, but there are some issues, and I haven't really gotten into detail yet on the impact of those issues. But the BIGgest issue really is that these are not really solutions.
Solution Set B Summary (with RED): They are MECHanisms that provide a way to
adjust memory. BUT who is behind the magic curtain deciding how much memory,
when to take it or give it back, and, if you've got lots of guests, how much to give or take from each one?
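
To make the ballooning mechanism concrete, here is a hedged user-space analogue (mine, not the real driver): inflating pins memory so the guest OS can no longer use it, letting the hypervisor reclaim the backing frames; deflating returns it. The real Xen balloon and virtio-balloon drivers do this in-kernel and hand page frame numbers to the hypervisor.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static char *balloon;
static size_t balloon_bytes;

/* "Inflate": grab memory and touch it so the pages are really taken
 * away from everything else running in the guest. */
static int balloon_inflate(size_t bytes)
{
    balloon = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (balloon == MAP_FAILED)
        return -1;
    memset(balloon, 1, bytes);
    balloon_bytes = bytes;
    return 0;
}

/* "Deflate": MADV_DONTNEED drops the backing pages immediately,
 * returning them to the kernel -- the give-it-back direction. */
static void balloon_deflate(void)
{
    madvise(balloon, balloon_bytes, MADV_DONTNEED);
    munmap(balloon, balloon_bytes);
}

int main(void)
{
    if (balloon_inflate(256UL << 20))        /* take away 256 MiB */
        return 1;
    puts("inflated: the OS now has 256 MiB less usable RAM");
    balloon_deflate();
    puts("deflated: memory returned");
    return 0;
}

The sketch shows why the side effects are unpredictable: the balloon blindly takes whatever it is told to take. Hot plug, by contrast, is driven from outside through coarse sysfs operations (writing "online"/"offline" to /sys/devices/system/memory/memoryN/state), which is why it suits only large-granularity changes; offlining can also fail outright when a section holds unmovable kernel pages, hence "hot plug delete is problematic".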

Solution Set C: Guests are dynamically “load balanced”
using some policy
• Guest-quantity-based policy
    • administrator presets memory “range” for each guest
    • balloons adjusted based on number of guests
    • does NOT respond to individual guest memory pressure
• Guest-pressure-driven host-control policy
    Since guests are essentially fancy processes, the host has some information about the memory pressure each guest is experiencing. It sends that information to a central controller in the host, which analyzes it and decrees how much memory each guest gets, basically dividing it up "fairly" and maybe leaving a little bit of fallow memory around just in case. But as I said earlier, it's often difficult to tell how efficiently any OS is using its memory, and the policy is only as good...
      • collects host and guest memory stats, sends to customizable policy engine
      • controls all guest balloons, plus host page sharing (KSM)
      • shrinks all guests "fairly" scaled by host memory pressure
      BUT…
      • under-aggressive for idle guests; under-aggressive ballooning limits migration (because too little memory is left free)
      • issues due to lack of omniscience


• Guest-pressure-driven guest-control policy
    In Xen land this is self-ballooning. Self-ballooning is a feedback-directed ballooning technique that uses information from within a guest OS to cause the guest to dynamically resize its own balloon. This is done aggressively, so that a guest OS uses as little memory as it can, for example when it is idle. In effect it is "enforced memory asceticism", so performance does not necessarily benefit; the feedback loop is sketched below.
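
A hedged sketch of that feedback loop: the Xen selfballoon driver runs in-kernel and derives its target from the guest's own Committed_AS in /proc/meminfo; the 10% slack factor and the set_balloon_target() helper here are my stand-ins for the driver's reserve logic and its write of the target to the balloon interface.

#include <stdio.h>
#include <string.h>

/* Return a field from /proc/meminfo in kB, e.g. "Committed_AS". */
static long meminfo_kb(const char *field)
{
    char line[256];
    long kb = -1;
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        size_t n = strlen(field);
        if (strncmp(line, field, n) == 0 && line[n] == ':') {
            sscanf(line + n + 1, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

/* Hypothetical stand-in: the real driver adjusts the balloon
 * directly; a toolstack could write the target to xenstore. */
static void set_balloon_target(long kb)
{
    printf("new balloon target: %ld kB\n", kb);
}

/* One iteration of the loop the driver repeats periodically. */
int main(void)
{
    long committed = meminfo_kb("Committed_AS");
    if (committed < 0)
        return 1;
    set_balloon_target(committed + committed / 10);  /* ~10% slack */
    return 0;
}

Driving the target from Committed_AS is what makes this "ascetic": the guest surrenders everything it has not actually committed, including most of its page cache.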

ALL POLICIES SUCK (er, HAVE ISSUES) BECAUSE:
1) MEMORY PRESSURE IS DIFFICULT TO MEASURE
2) HARD TO PREDICT, THE FUTURE IS (Yoda)

Solution Set C SUMMARY: So it might be the case that any policy is better than no
policy, but all policies are doomed to failure due to insufficient or inaccurate information
and due to lack of a crystal ball. So...


Solution Set D: If we are never going to succeed, maybe we should assume that
sometimes we will fail. So what can we do to plan better, and to correct or compensate for the inevitable failures that we know are going to happen? And that finally brings us to...
    Transcendent Memory

Why is tmem so good?: So this all looks pretty impressive. Why is it so good? Well,
tmem is basically acting as a big shared page cache, shared across all of the guest OS's. If you know the branch of statistics called queuing theory, you'll understand that this is mathematically better than a set of smaller page caches (a worked example follows). Then when dedup and compression are turned on, the size of this single shared page cache is effectively quadrupled for this workload. Further, if there were any swapping in this workload, it too would share the large pool of memory.

What about that VCPU-seconds increase, though? Well, virtualization was first conceived as a way to more effectively utilize CPU cycles, and tmem does exactly that: it IS not only making better use of underutilized physical RAM, but also making use of previously unutilized CPU cycles! And... remember that this workload has all four guests pounding on the CPU and disk simultaneously. If those guests instead
utilize the machine more sparsely, tmem may see even better results.
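
To make the queuing-theory point concrete (my worked example, not from the talk): model each of the n private page caches as an M/M/1 queue with service rate \mu fed arrivals at rate \lambda/n, versus one pooled cache of rate n\mu absorbing the full \lambda. Then

\[
T_{\text{partitioned}} = \frac{1}{\mu - \lambda/n} = \frac{n}{n\mu - \lambda},
\qquad
T_{\text{pooled}} = \frac{1}{n\mu - \lambda} = \frac{T_{\text{partitioned}}}{n},
\]

so pooling alone cuts the mean response time by a factor of n, before dedup and compression stretch the pool further.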

Summary: So to summarize, tmem's primary objective is to make memory a more flexible
resource, and to do it better than ballooning and other memory-utilization technologies,
with fewer disadvantages. It certainly works well on this workload, showing a dramatic
reduction in disk reads and a faster time-to-completion while utilizing the CPU more
effectively. Disadvantages? Well, the OS must be made smart in a few places by adding a
few hooks that help it deal better with page cache evictions and swapping. Those are the
cleancache and frontswap patches posted to lkml. And one could argue that by using the
CPU more effectively we're spending some extra power, though that may be compensated
for by the reduction in disk accesses.
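
Those hooks have deliberately loose semantics, which is what makes them safe to bolt on: a put may be rejected, and an ephemeral page that was accepted may be gone again by the time it is wanted, so the kernel can always fall back to disk. A self-contained toy model of that contract (the names tmem_put/tmem_get/tmem_flush and the tiny round-robin pool are mine, for illustration):

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define POOL_SLOTS 4            /* tiny pool, so evictions happen */

struct slot { int used; long key; char data[PAGE_SIZE]; };
static struct slot pool[POOL_SLOTS];
static unsigned next_victim;    /* trivial round-robin eviction */

/* Ephemeral put: may evict an earlier page; success is never owed. */
static void tmem_put(long key, const char *page)
{
    struct slot *s = &pool[next_victim++ % POOL_SLOTS];
    s->used = 1;
    s->key = key;
    memcpy(s->data, page, PAGE_SIZE);
}

/* Fills *page and returns 0 only if the page is still around. */
static int tmem_get(long key, char *page)
{
    for (int i = 0; i < POOL_SLOTS; i++)
        if (pool[i].used && pool[i].key == key) {
            memcpy(page, pool[i].data, PAGE_SIZE);
            return 0;
        }
    return -1;                  /* miss: caller re-reads from disk */
}

/* Called when the real page changes, so stale data is never served. */
static void tmem_flush(long key)
{
    for (int i = 0; i < POOL_SLOTS; i++)
        if (pool[i].used && pool[i].key == key)
            pool[i].used = 0;
}

int main(void)
{
    char page[PAGE_SIZE] = "clean page contents";
    char back[PAGE_SIZE];

    tmem_put(42, page);                      /* page-cache eviction */
    if (tmem_get(42, back) == 0)             /* later re-read: a hit */
        printf("hit: %s\n", back);
    tmem_flush(42);                          /* the file was changed */
    printf("after flush: %s\n",
           tmem_get(42, back) ? "miss, go to disk" : "hit");
    return 0;
}

Cleancache wires put to clean page-cache eviction, get to the subsequent read, and flush to truncation and overwrite, so stale data is never served; frontswap does the same for swap pages but with persistent semantics, where an accepted put must stay retrievable.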