Category: Virtualization

2014-04-17 10:25:29

Reposted from: http://blog.chinaunix.net/uid-1838361-id-1753201.html

To pass more of the host CPU's features through to the guest:

1. # virsh capabilities
Save the CPU-related section of the output as an XML file (host.xml):
<cpu>
  <arch>x86_64</arch>
  <model>core2duo</model>
  <topology sockets='1' cores='4' threads='1'/>
  <feature name='lahf_lm'/>
  <feature name='sse4.1'/>
  <feature name='dca'/>
  <feature name='xtpr'/>
  <feature name='cx16'/>
  <feature name='tm2'/>
  <feature name='est'/>
  <feature name='vmx'/>
  <feature name='ds_cpl'/>
  <feature name='pbe'/>
  <feature name='tm'/>
  <feature name='ht'/>
  <feature name='ss'/>
  <feature name='acpi'/>
  <feature name='ds'/>
</cpu>
2. # virsh cpu-baseline host.xml
<cpu match='exact'>
  <model>core2duo</model>
  <feature policy='require' name='lahf_lm'/>
  <feature policy='require' name='sse4.1'/>
  <feature policy='require' name='dca'/>
  <feature policy='require' name='xtpr'/>
  <feature policy='require' name='cx16'/>
  <feature policy='require' name='est'/>
  <feature policy='require' name='vmx'/>
  <feature policy='require' name='ds_cpl'/>
  <feature policy='require' name='pbe'/>
  <feature policy='require' name='tm'/>
  <feature policy='require' name='ht'/>
  <feature policy='require' name='ss'/>
  <feature policy='require' name='acpi'/>
  <feature policy='require' name='ds'/>
</cpu>
This configuration is equivalent to the qemu -cpu qemu64,+ssse3,+sse4.1 approach described on the KVM website.
It can be verified by checking the guest's startup log:
/var/log/libvirt/qemu/<guest-name>.log

3. Paste the output above into the guest's XML file, directly under the <domain> element.

4. Problem: the tm (Thermal Monitor) feature causes a kernel panic in the guest, so remove it.
tm2 (Thermal Monitor 2) is not needed either; remove it as well.
For the meaning of each CPU feature:
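The feature removal in step 4 can be scripted instead of edited by hand. A minimal sketch using Python's standard xml.etree.ElementTree; the function name and sample document are my own, the feature names to drop (tm, tm2) come from the text above:

```python
import xml.etree.ElementTree as ET

def drop_features(cpu_xml, names):
    """Return the CPU XML with the named <feature> elements removed."""
    cpu = ET.fromstring(cpu_xml)
    for feat in cpu.findall('feature'):
        if feat.get('name') in names:
            cpu.remove(feat)
    return ET.tostring(cpu, encoding='unicode')

cpu = """<cpu match='exact'>
  <model>core2duo</model>
  <feature policy='require' name='tm'/>
  <feature policy='require' name='tm2'/>
  <feature policy='require' name='vmx'/>
</cpu>"""

cleaned = drop_features(cpu, {'tm', 'tm2'})
print(cleaned)  # tm and tm2 are gone; vmx survives
```

The cleaned fragment can then be pasted back into the guest definition as in step 3.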

5. Result:
The host's CPU flags:
cat /proc/cpuinfo | grep flags | uniq
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 lahf_lm

The default guest CPU flags (i.e. the default qemu64 model):
cat /proc/cpuinfo | grep flags | uniq
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm pni

The guest CPU flags with the added features:
cat /proc/cpuinfo | grep flags | uniq
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht syscall nx lm pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 lahf_lm
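The differences between the three flag lists above can be computed directly with set arithmetic. A quick sketch; the three flag strings are transcribed from the /proc/cpuinfo output above:

```python
host = ("fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat "
        "pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm "
        "constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr "
        "sse4_1 lahf_lm").split()
default = ("fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat "
           "pse36 clflush mmx fxsr sse sse2 syscall nx lm pni").split()
tuned = ("fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat "
         "pse36 clflush dts acpi mmx fxsr sse sse2 ss ht syscall nx lm pni "
         "monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 lahf_lm").split()

# Flags the feature list added on top of the default qemu64 model:
gained = set(tuned) - set(default)
# Host flags still absent in the tuned guest (tm was dropped deliberately;
# constant_tsc is a Linux-synthesized flag, not a plain CPUID bit):
missing = set(host) - set(tuned)
print(sorted(gained))
print(sorted(missing))  # ['constant_tsc', 'tm']
```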


One open question: after adding CPU features this way, the CPUs assigned to the guest are kept fully busy on the host. For example, with 2 vCPUs assigned to the guest, qemu-kvm constantly consumes 200% CPU on the host. Why?

Reference:

Many of the management problems in virtualization are caused by the annoyingly popular & desirable host migration feature! I previously talked about PCI device addressing problems, but this time the topic to consider is that of CPU models. Every hypervisor has its own policies for what a guest will see for its CPUs by default: Xen just passes through the host CPU, while with QEMU/KVM the guest sees a generic model called "qemu32" or "qemu64". VMWare does something more advanced, classifying all physical CPUs into a handful of groups and has one baseline CPU model for each group that's exposed to the guest. VMWare's behaviour lets guests safely migrate between hosts provided they all have physical CPUs that classify into the same group. libvirt does not like to enforce policy itself, preferring just to provide the mechanism on which the higher layers define their own desired policy. CPU models are a complex subject, so it has taken longer than desirable to support their configuration in libvirt. In the 0.7.5 release that will be in Fedora 13, there is finally a comprehensive mechanism for controlling guest CPUs.

Learning about the host CPU model

If you have been following earlier articles (or otherwise know a bit about libvirt) you'll know that the "virsh capabilities" command displays an XML document describing the capabilities of the hypervisor connection & host. It should thus come as no surprise that this XML schema has been extended to provide information about the host CPU model. One of the big challenges in describing CPU models is that every architecture has a different approach to exposing its capabilities. On x86, a modern CPU's capabilities are exposed via the CPUID instruction. Essentially this comes down to a set of 32-bit integers with each bit given a specific meaning. Fortunately AMD & Intel agree on common semantics for these bits. VMWare and Xen both expose the notion of CPUID masks directly in their guest configuration format. Unfortunately (or fortunately depending on your POV) QEMU/KVM supports far more than just the x86 architecture, so CPUID is clearly not suitable as the canonical configuration format. QEMU ended up using a scheme which combines a CPU model name string with a set of named flags. On x86 the CPU model name maps to a baseline CPUID mask, and the flags can be used to then toggle bits in the mask on or off. libvirt decided to follow this lead and use a combination of a model name and flags. Without further ado, here is an example of what libvirt reports as the capabilities of my laptop's CPU:

# virsh capabilities
<capabilities>
  <host>
    <cpu>
      <arch>i686</arch>
      <model>pentium3</model>
      <topology sockets='1' cores='2' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='lm'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='ssse3'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='pni'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='sse2'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='clflush'/>
      <feature name='apic'/>
    </cpu>
    ...snip...

It is not practical to have a database listing all known CPU models, so libvirt has a small list of baseline CPU model names. It picks the one that shares the greatest number of CPUID bits with the actual host CPU and then lists the remaining bits as named features. Notice that libvirt does not tell you what features the baseline CPU contains. This might seem like a flaw at first, but as will be shown next, it is not actually necessary to know this information.
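The model-selection step just described can be illustrated with plain feature sets. A toy sketch; the model definitions below are invented for illustration (libvirt's real data lives in its cpu_map and operates on CPUID bits, not names):

```python
# Toy baseline models, each reduced to a set of feature names.
MODELS = {
    'qemu64':   {'fpu', 'sse', 'sse2', 'nx', 'lm'},
    'pentium3': {'fpu', 'sse', 'mmx'},
    'core2duo': {'fpu', 'sse', 'sse2', 'ssse3', 'nx', 'lm', 'cx16'},
}

def pick_model(host_features):
    """Pick the model sharing the most bits with the host, considering
    only models whose features are all present on the host."""
    candidates = {name: feats for name, feats in MODELS.items()
                  if feats <= host_features}
    name = max(candidates, key=lambda n: len(candidates[n]))
    # The remaining host bits become named <feature> elements.
    extra = host_features - candidates[name]
    return name, extra

host = {'fpu', 'sse', 'sse2', 'ssse3', 'nx', 'lm', 'cx16', 'vmx', 'est'}
model, extra = pick_model(host)
print(model, sorted(extra))  # core2duo ['est', 'vmx']
```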

Determining a compatible CPU model to suit a pool of hosts

Now that it is possible to find out what CPU capabilities a single host has, the next problem is to determine what CPU capabilities are best to expose to the guest. If it is known that the guest will never need to be migrated to another host, the host CPU model can be passed straight through unmodified. Some lucky people might have a virtualized data center where they can guarantee all servers will have 100% identical CPUs. Again the host CPU model can be passed straight through unmodified. The interesting case, though, is where there is variation in CPUs between hosts. In this case the lowest common denominator CPU must be determined. This is not entirely straightforward, so libvirt provides an API for exactly this task. Provide libvirt with a list of XML documents, each describing a host's CPU model, and it will internally convert these to CPUID masks, calculate their intersection, and finally convert the CPUID mask result back into an XML CPU description. Taking the CPU description from a random server:

<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <model>phenom</model>
      <topology sockets='2' cores='4' threads='1'/>
      <feature name='osvw'/>
      <feature name='3dnowprefetch'/>
      <feature name='misalignsse'/>
      <feature name='sse4a'/>
      <feature name='abm'/>
      <feature name='cr8legacy'/>
      <feature name='extapic'/>
      <feature name='cmp_legacy'/>
      <feature name='lahf_lm'/>
      <feature name='rdtscp'/>
      <feature name='pdpe1gb'/>
      <feature name='popcnt'/>
      <feature name='cx16'/>
      <feature name='ht'/>
      <feature name='vme'/>
    </cpu>
    ...snip...

As a quick check, it is possible to ask libvirt whether this CPU description is compatible with the previous laptop CPU description, using the "virsh cpu-compare" command:

$ ./tools/virsh cpu-compare cpu-server.xml
CPU described in cpu-server.xml is incompatible with host CPU

libvirt is correctly reporting the CPUs are incompatible, because there are several features in the laptop CPU that are missing in the server CPU. To be able to migrate between the laptop and the server, it will be necessary to mask out some features, but which ones? Again libvirt provides an API for this, also exposed via the "virsh cpu-baseline" command:

# virsh cpu-baseline both-cpus.xml
<cpu match='exact'>
  <model>pentium3</model>
  <feature policy='require' name='lahf_lm'/>
  <feature policy='require' name='lm'/>
  <feature policy='require' name='cx16'/>
  <feature policy='require' name='monitor'/>
  <feature policy='require' name='pni'/>
  <feature policy='require' name='ht'/>
  <feature policy='require' name='sse2'/>
  <feature policy='require' name='clflush'/>
  <feature policy='require' name='apic'/>
</cpu>

libvirt has determined that in order to safely migrate a guest between the laptop and the server, it is necessary to mask out 11 features from the laptop's XML description.
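Conceptually, the baseline computation is an intersection over each host's expanded feature set. A simplified sketch: the laptop set is transcribed from the capabilities XML above; the server set is its listed features padded with bits the phenom baseline model itself would contribute (that padding is my assumption, so the example reproduces the cpu-baseline output; real libvirt works on CPUID bit masks and also re-selects the best model name):

```python
def baseline(hosts):
    """Intersect the feature sets of several hosts, as virsh cpu-baseline
    does conceptually (the real code operates on CPUID bit masks)."""
    return set.intersection(*hosts)

laptop = {'lahf_lm', 'lm', 'xtpr', 'cx16', 'ssse3', 'tm2', 'est', 'vmx',
          'ds_cpl', 'monitor', 'pni', 'pbe', 'tm', 'ht', 'ss', 'sse2',
          'acpi', 'ds', 'clflush', 'apic'}
server = {'osvw', '3dnowprefetch', 'misalignsse', 'sse4a', 'abm',
          'cr8legacy', 'extapic', 'cmp_legacy', 'lahf_lm', 'rdtscp',
          'pdpe1gb', 'popcnt', 'cx16', 'ht', 'vme',
          # bits assumed to come from the phenom baseline model itself:
          'lm', 'monitor', 'pni', 'sse2', 'clflush', 'apic'}

common = baseline([laptop, server])
print(sorted(common))  # the 9 features virsh cpu-baseline kept above
```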

Configuring the guest CPU model

To simplify life, the guest CPU configuration accepts the same basic XML representation as the host capabilities XML exposes. In other words, the XML from that "cpu-baseline" virsh command can now be copied directly into the guest XML at the top level under the <domain> element. As the observant reader will have noticed from that last XML snippet, there are a few extra attributes available when describing a CPU in the guest XML. These can mostly be ignored, but for the curious here's a quick description of what they do. The top-level <cpu> element gets an attribute called "match" with possible values:

  • match="minimum" – the host CPU must have at least the CPU features described in the guest XML. If the host has additional features beyond the guest configuration, these will also be exposed to the guest
  • match="exact" – the host CPU must have at least the CPU features described in the guest XML. If the host has additional features beyond the guest configuration, these will be masked out from the guest
  • match="strict" – the host CPU must have exactly the same CPU features described in the guest XML. No more, no less.

The next enhancement is that the <feature> elements can each have an extra "policy" attribute with possible values:

  • policy="force" – expose the feature to the guest even if the host does not have it. This is kind of crazy, except in the case of software emulation.
  • policy="require" – expose the feature to the guest and fail if the host does not have it. This is the sensible default.
  • policy="optional" – expose the feature to the guest if it happens to support it.
  • policy="disable" – if the host has this feature, then hide it from the guest
  • policy="forbid" – if the host has this feature, then fail and refuse to start the guest

The "forbid" policy is for a niche scenario where a badly behaved application will try to use a feature even if it is not in the CPUID mask, and you wish to prevent accidentally running the guest on a host with that feature. The "optional" policy has special behaviour with respect to migration. When the guest is initially started the flag is optional, but when the guest is live migrated, this policy turns into "require", since you can't have features disappearing across migration.
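The five policies amount to a small decision table. A sketch of the semantics just described, as my own function (not libvirt code), mapping one feature's policy plus host support to the guest's view:

```python
def resolve(policy, host_has):
    """Return the guest's view of one feature ('on' or 'off'), or raise
    if the policy/host combination means the guest cannot start."""
    if policy == 'force':
        return 'on'   # exposed even without host support (emulation case)
    if policy == 'require':
        if not host_has:
            raise RuntimeError('host lacks required feature')
        return 'on'
    if policy == 'optional':
        return 'on' if host_has else 'off'
    if policy == 'disable':
        return 'off'  # hidden even if the host has it
    if policy == 'forbid':
        if host_has:
            raise RuntimeError('refusing to start: host has forbidden feature')
        return 'off'
    raise ValueError('unknown policy: %s' % policy)

print(resolve('optional', True))   # on
print(resolve('disable', True))    # off
```

Note the migration rule from the paragraph above: once a guest is live migrated, 'optional' is treated as 'require' for whatever state the feature ended up in.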

All the stuff described in this posting is currently implemented for libvirt's QEMU/KVM driver; the basic code is in the 0.7.5/0.7.6 releases, but the final 'cpu-baseline' stuff is arriving in 0.7.7. Needless to say this will all be available in Fedora 13 and future RHEL. This obviously also needs to be ported over to the Xen and VMWare ESX drivers in libvirt, which isn't as hard as it sounds, because libvirt has a very good internal API for processing CPUID masks now. Kudos to Jiri Denemark for doing all the really hard work on this CPU modelling system!
