Category: Virtualization

2012-04-04 02:22:05

I will try to summarize the steps needed to run an HVM Vista machine on an Ubuntu 9.04 distribution, with a 2.6.31.5 paravirt-ops Dom0 kernel under Xen 3.4.1. I will note the few specific pitfalls, especially with networking over a Point-to-Point Ethernet link (a USB modem), and some errors I encountered along the way, in the hope that this will help you save some time and better understand the Xen requirements.

  • Hardware Virtualization Processor requirement

An HVM guest needs support from the processor, in order for the virtualization software to trap disallowed machine instructions executed in guest space, as well as to redirect I/O accesses from device firmware to the correct guest addresses, through some sort of IOMMU configured by the hypervisor. To check whether virtualization support exists and has been activated on your CPUs, run the following from the Dom0 Ubuntu distribution:

# for an Intel CPU
matmih@Express2:~$ grep vmx /proc/cpuinfo
# or, for an AMD CPU
matmih@Express2:~$ grep svm /proc/cpuinfo

If nothing is shown then most likely the virtualization features have not been enabled. So reboot, enter the BIOS (F8 key on my laptop after the initial boot image) and look for the option to enable virtualization. Note: on my Intel T9600 dual core processor it was an Intel Virtualization Support option in the Processor Preferences tab, disabled by default, that I had to enable and save; the option name may differ depending on the BIOS type and version. If you are certain that your processor has hardware virtualization support but your BIOS does not show any such option, you may want to upgrade your BIOS firmware: Xen and Dom0 will not be aware of this feature if they cannot read it from the BIOS, which is what the Dom0 kernel exposes through the /proc/cpuinfo filesystem.
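To save a trip into the BIOS menus, the two checks can be wrapped in a tiny script (a minimal sketch; it assumes a Dom0 with the xm toolstack installed, and that xm info's xen_caps line is where Xen advertises its HVM capabilities):

#!/bin/sh
# report whether the CPU and Xen both see hardware virtualization support
if grep -qE 'vmx|svm' /proc/cpuinfo ; then
    echo "CPU advertises hardware virtualization (VT-x / AMD-V)"
else
    echo "no vmx/svm flag visible - enable virtualization in the BIOS and reboot"
fi
# Xen lists HVM capabilities such as hvm-3.0-x86_32 here:
sudo xm info | grep xen_caps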

  • Xen HVM configuration

The Xen installation already provides 2 templates in the /etc/xen directory: xmexample.hvm, for running an HVM guest under a userspace Qemu emulator, and xmexample.hvm-stubdom, for having the guest I/O emulation take place in a dedicated Qemu domain/guest. You can check the comments for the options in the template files, but I will comment on my selection. For my initial needs I created an HVM guest for a Windows Vista system, running under a userspace Qemu emulator, with 1 virtual CPU, 2 GB RAM, 1 Ethernet link, USB enabled, and 10 GB of hard disk space backed by a Dom0 ext3 file. Though I placed my xen_vista.cfg config file in the ~/Work/XenImages directory, if you want Xen to automatically launch the domain when Dom0 boots you can place the config, or create a symbolic link to it, in the /etc/xen/auto directory. This auto-directory Xend option, as well as other common domain options, especially those related to transitions between states, migration and so on, can be found in the /etc/sysconfig/xendomains file. In the following sections I will describe and comment on the format of my xen_vista.cfg HVM guest domain file:

a) Initial Domain Builder

name = "XenVista"

import os, re
arch_libdir = 'lib'
arch = os.uname()[4]
if os.uname()[0] == 'Linux' and re.search('64', arch):
    arch_libdir = 'lib64'

kernel = "/usr/lib/xen/boot/hvmloader"
builder = 'hvm'

The initial options check which library directory (the arch_libdir variable), lib or lib64, is used on your Linux Dom0 distribution; it is needed later on to locate the correct I/O device model. They also specify the firmware the domain builder will use (the kernel option; you can find the sources for the HVM Qemu firmware in xen3.4.1/tools/firmware/hvmloader) as well as which domain builder function the python xm tool should use for creating this guest domain: an HVM guest uses the 'hvm' entry, specified by the builder parameter. You can check the configure_hvm function in the xen3.4.1/tools/python/xen/xm/create.py script to see the creation flow: it basically looks for certain options in the config file (such as apic or acpi), copies the firmware into the guest domain space and launches Qemu emulation for the guest domain using the config variables specified below.
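Once the config file is complete, creating and inspecting the domain is straightforward (a short sketch, assuming the config lives in ~/Work/XenImages as above):

matmih@Express2:~/Work/XenImages$ sudo xm create xen_vista.cfg   # builds the domain and launches qemu-dm
matmih@Express2:~/Work/XenImages$ sudo xm list                   # the new XenVista domain should be listed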

b) Guest layout – cpu, ram, controllers

vcpus  = 1
cpus   = "1"
memory = 2048
pae    = 0
acpi   = 1
apic   = 1

This configures the firmware, through Xen hypercalls, to let the guest see one virtual CPU, running on the first physical CPU (out of the 2 available, since we are running on an Intel dual core processor), with 2 GB of RAM. We let Xen choose the default size for the shadow_memory parameter, the non-swapped memory in which the Xen hypervisor keeps internal information for this domain, such as cached and active guest TLB tables. Physical Address Extension support is off (the pae option, matching our 32-bit Vista kernel version), the Advanced Configuration and Power Interface BIOS functions are on, and so is the Advanced Programmable Interrupt Controller, so the guest will see something more capable than the default 8259 controller. These settings shape the internal configuration of the guest firmware/BIOS, which is set up by Xen by means of hypercalls: instead of probing the hardware, the firmware makes hypercalls to Xen to check its capabilities.
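After the domain is up you can cross-check that this layout took effect (a sketch; xm vcpu-list shows the vCPU-to-physical-CPU pinning requested by the cpus option):

matmih@Express2:~$ sudo xm vcpu-list XenVista   # one vCPU, pinned to physical CPU 1
matmih@Express2:~$ sudo xm list XenVista        # the memory column should read 2048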

c) Disk and Ethernet settings

vif  = [ 'type=ioemu, ip=192.168.1.2' ]
disk = [ 'tap:aio:/home/matmih/Work/XenImages/xen_vista.img,hda,w',
         'phy:/dev/scd0,hdc:cdrom,r' ]
boot = "dc"

The first entry creates a virtual Ethernet interface to which the guest domain will have access. The type=ioemu parameter tells the Xend tool that this interface is not a netfront paravirtualized Ethernet driver but will be emulated by Qemu itself. This means that when Qemu is started by Xend its command line will contain "-net nic,vlan=0 -net tap,vlan=0,ifname=tap1.0", meaning that a tap virtual Ethernet adapter will be created in Dom0 Linux and linked by the Qemu emulation to the actual guest Ethernet interface. In order to configure the newly created tap1.0 adapter, the /etc/xen/qemu-ifup script is called. You can always check /var/log/xen/xend.log and /var/log/xen/qemu-dm-{domain_name}.log to see the actual command passed to Qemu. The ip parameter is the IP address that the guest system will use for its static network configuration; it is used locally, in Dom0 Linux, to set up routes to the tap1.0 interface, depending on the network type chosen for the virtual machines (I will elaborate in the networking section).

When it comes to the disk drives that the guest sees, things become more complicated. Qemu emulates basic IDE hardware that the guest Vista will configure. What we can choose in the xen_vista.cfg domain file is the backend storage behind what Qemu emulates. It can be a physically mounted drive (the phy parameter), in which case the Qemu emulator binaries forward the emulated disk I/O requests to the native device drivers of Dom0 Linux, or it can be a file-backed VBD (Virtual Block Device). The latter has the advantage of being more flexible (it can even be a network file) but can carry performance penalties for intensive I/O workloads, and for Qemu to use it there must be a driver loaded in the Dom0 kernel that supports this functionality. At the moment there are 2 different ways this can be done:
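For completeness, this is roughly how I would create the 10 GB file that backs the VBD (a sketch; the path matches the disk= line above, and seeking past the end produces a sparse file that only consumes disk space as Vista writes to it):

# raw image: 10 GB sparse file
dd if=/dev/zero of=/home/matmih/Work/XenImages/xen_vista.img bs=1M count=0 seek=10240
# or, for the qcow variant, using the image tool shipped with Qemu:
qemu-img create -f qcow /home/matmih/Work/XenImages/xen_vista.qcow 10G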

i) Loopback driver supporting raw files

'file:/home/matmih/Work/XenImages/xen_vista.img,hda,w'

This has the advantage that the loopback driver is precompiled in most kernels, but the default number of supported loop devices is 8 (mounted as /dev/loop*) and it is known to buffer a lot and be quite slow for heavy I/O workloads. You may also find the commands below useful:

# to manually mount the disk image in Dom0 Linux
mount -o loop /home/matmih/Work/XenImages/xen_vista.img /mnt/VistaHda
# to create an additional loop device
mknod -m660 /dev/loopNew b 7 8

ii) Blktap driver supporting raw files or qcow Qemu images

'tap:aio:/home/matmih/Work/XenImages/xen_vista.img,hda,w'    # for raw images
'tap:qcow:/home/matmih/Work/XenImages/xen_vista.qcow,hda,w'  # for Qemu images

Even though it has to be specially ported to a Xen kernel and recompiled, it offers higher performance than the loopback driver, is more scalable and is strongly recommended by the Xen team. You can also use this driver to mount the image into your Dom0 Linux filesystem:

# to create a blktap device node for a raw image
xm block-attach 0 tap:aio:/home/matmih/Work/XenImages/xen_vista.img /dev/blkVista
# to create a blktap device node for a qcow image
xm block-attach 0 tap:qcow:/home/matmih/Work/XenImages/xen_vista.qcow /dev/blkVista
# to actually mount the device to a filesystem location
mount /dev/blkVista /mnt/VistaHda

d) Device emulation

device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
sdl       = 1
vnc       = 0
opengl    = 0
stdvga    = 0
monitor   = 1
usb       = 1
usbtablet = 1
serial    = 'pty'

The last part of the domain configuration file contains the specific device options that are passed to Qemu for I/O emulation. The device_model option indicates the Qemu process binary to be launched. In order to have the ability to talk to Qemu itself we will launch the Vista domain in an SDL window (the libsdl package is required) instead of a VNC client. Both SDL and VNC work very well for displaying Windows in a graphical console, although VNC has some distinct advantages over SDL. Firstly, VNC provides greater flexibility than SDL in terms of remote access to the DomU graphical console: with VNC it is possible to connect to the graphical console from other systems, either on the local network or even over the internet. Secondly, when you close a VNC viewer window the guest domain continues to run, allowing you to simply reconnect and carry on where you left off; closing an SDL window, however, immediately terminates the guest DomU system, resulting in possible data loss. In the SDL window, to switch from the Vista domain view to the Qemu monitor console (the monitor=1 option), use Ctrl + Alt + 2 and issue monitor commands there. We also chose to emulate a Cirrus Logic device (stdvga=0) instead of a VESA one, and did not enable any graphical acceleration in the SDL window (opengl=0) because I am missing the fglrx module, described in another post, corresponding to the ATI drivers for my video card. We also ask Qemu to emulate a UHCI controller (usb=1) for the guest domain, in order to have the ability to add new USB devices, and we emulate a USB tablet pointing device for guest user input (usbtablet=1), a PS/2 keyboard being already emulated by default.
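For reference, these are the monitor commands I find most useful from the Ctrl + Alt + 2 console (standard Qemu monitor syntax; Ctrl + Alt + 1 switches back to the guest display; drive names should be taken from the info block output rather than assumed):

(qemu) info block                # list the emulated IDE drives and their backing files
(qemu) eject hdc                 # release the emulated CD-ROM after the install
(qemu) sendkey ctrl-alt-delete   # deliver Ctrl+Alt+Del to the Vista guest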

  • Virtual machines networking

In order to have a working Ethernet interface on a Xen powered virtual machine, one must first decide how the network between the virtual machines and the privileged Dom0 host should look, and how the outgoing network interface(s) will be used. Xen can be configured to let its virtual machines reuse an existing network and its IP addresses (bridged networking) or to create a new private network for internal guest use (NAT and routed networking). The 3 types of network configuration already have pre-customized scripts in the /etc/xen/scripts directory, and either configuration can be selected in the /etc/xen/xend-config.sxp config file prior to starting the xend utility. I will briefly describe the 3 configurations and the problems I encountered and overcame to make the HVM's network work with the Point-to-Point Ethernet link of my USB modem. One thing to mention is that the network-script entry in the xend configuration file is called once, when xend is started, to create the necessary bridges and configure the forwarding rules needed for the private virtual network, if any, while the vif-script entry is called each time a new virtual machine is created, to configure the newly added network interface in the privileged guest that corresponds to the virtual machine interface. Also note that when type=ioemu is specified in the vif configuration, the Qemu emulator will bring up a tap virtual adapter connected to the emulated guest domain Ethernet interface, and the /etc/xen/qemu-ifup script will be called to configure the tap interface (the first parameter is the newly created tap name and the second one the bridge option specified in the vif domain config).
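Whichever configuration you pick, two commands give a quick view of the resulting plumbing in Dom0 (brctl comes from the bridge-utils package mentioned below):

brctl show    # list Ethernet bridges and the vifx.x/tapx.x interfaces enslaved to them
ifconfig -a   # show all virtual interfaces, including ones that are still down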

a) Bridge Networking

This allows the virtual machines to be in the same network as the bridged outgoing interface, so all guests will have an IP address within the IP range and subnet mask of the original network interface. This requires the bridge-utils package, as a new Ethernet bridge is created for the original interface, and the virtual interfaces created and added to each virtual machine are added to the bridge as well. The default bridge name is xenbr0 and the default outgoing interface is eth0, but the names can be overridden in xend-config.sxp:

# /etc/xen/xend-config.sxp
(network-script network-bridge)
(vif-script vif-bridge)

# ~/Work/XenImages/xen-vista.cfg
vif = [ 'ip=192.168.0.6' ]

Because the network-bridge script tries to take down (ifdown) my ppp0 interface in order to create a temporary bridge and rename it to the default xenbr0 name, the script failed and I was left with only an unconfigured tmpbridge interface (as shown by ifconfig) instead of the default xenbr0. The following message also appeared when I tried to start xend:

matmih@Express2:~/Work/XenImages$ sudo xend start
ifdown: interface ppp0 not configured
RTNETLINK answers: Device or resource busy

To correct this error I would have had to manually edit the network-bridge script, which seemed a little too complicated and not worth the time for my initial needs. So basically the network-bridge script is run when xend starts and creates a xenbr0 bridge, adding the outgoing interface to that bridge. The vif-bridge script is run when a new paravirtualized backend net interface is added to Dom0 (with a vifx.x name); the interface is configured with the provided IP (which must be in the same network as the outgoing interface) and added to the xenbr0 bridge as well. This vifx.x backend is used by the paravirtualized frontend found in the guest domain. For HVM domains the newly created tapx.x virtual net adapter is added to the bridge in the qemu-ifup script, and there is no need for a vifx.x interface to be created.

b) Routed Networking with NAT

This allows the virtual machines to live in a private network and communicate with each other through the private network Dom0 exposes via the configured backend vifx.x interfaces. Traffic from the vifx.x interfaces is NATed to the outgoing interface, meaning that the virtual machines' IP addresses are hidden behind the outgoing interface's IP address.

# /etc/xen/xend-config.sxp
(network-script network-nat)
(vif-script vif-nat)

# ~/Work/XenImages/xen-vista.cfg
vif = [ 'ip=10.0.0.2' ]

The first script configures a NAT rule for all IP addresses going out through the outgoing interface. To see the actual NAT rule and routes, do the following:

iptables -nvL -t nat ; route -n

The vif-nat script will assign a seemingly random private network IP address to the vifx.x interface. Even if that address differs from the IP assigned in the domain config, you should use the one in the domain file when configuring the guest domain interface, as the routes for the 10.0.0.2 address are the ones configured in the Dom0 routing tables. One thing to check is that forwarding has been enabled in Dom0 Linux, so uncomment the following line in your sysctl file, if present:

# /etc/sysctl.conf
net.ipv4.conf.default.forwarding=1

root@Express2$~> sysctl -p

c) Two-way Routed Network

As in the NAT configuration, the virtual machines are in a private network and can reach each other and the outgoing network. The only difference is that their IP is no longer NATed and hidden behind the Dom0 IP address. So any outgoing packet from the guests carries the private network address, and any host on the public outgoing network can reach the virtual machines at their private IPs, provided a routing rule for the private guest network is added on the default gateway of the public network. This is done in the vif-route script by configuring the new vifx.x network interface backend in Dom0, for a new guest, with an IP address in the public network range, and configuring a route for the actual 10.0.0.2 private guest network to go over the vifx.x interface; a sketch of the gateway-side route follows the config below.
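The gateway-side rule would look roughly like this (a hypothetical sketch: assume Dom0's vifx.x interface was given the public address 192.168.0.6 and the guests use the 10.0.0.0/24 private network):

# run on the default gateway of the public network, not on Dom0
route add -net 10.0.0.0 netmask 255.255.255.0 gw 192.168.0.6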

# /etc/xen/xend-config.sxp
(network-script network-route)
(vif-script vif-route)

# ~/Work/XenImages/xen-vista.cfg
vif = [ 'ip=10.0.0.2' ]

d) Custom solution for HVM guests

Unfortunately scenario a) does not work for my Point-to-Point USB modem connection, and the b) and c) scripts are broken for Qemu HVM guests, because Qemu itself brings up a new tap virtual adapter interface (tapx.x) to emulate the guest's Ethernet, while the scripts bring up a vifx.x backend meant to work with the paravirtualized net frontend driver found in paravirtualized guests. To work around this issue while keeping the NATed configuration for the xend tool (in order to still be able to add paravirtualized guests later), I will manually configure my tapx.x interface to work with my USB modem, making the vifx.x backend unnecessary. First, let's see my current network configuration:

matmih@Express2:~$ ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ppp0      Link encap:Point-to-Point Protocol
          inet addr:10.81.110.227  P-t-P:10.64.64.64  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3

matmih@Express2:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.64.64.64     0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 ppp0
0.0.0.0         10.64.64.64     0.0.0.0         UG    0      0        0 ppp0

We will keep the NAT’ed configuration described in b) with the following changes:

# /etc/xen/xend-config.sxp
(network-script 'network-nat netdev=ppp0')
(vif-script vif-nat)

# ~/Work/XenImages/xen-vista.cfg
vif = [ 'type=ioemu, bridge=192.168.1.254/24' ]

The xend-config.sxp change replaces the default eth0 interface with my ppp0 USB modem Point-to-Point interface. This is used by the /etc/xen/scripts/network-nat script to add the following NAT rule for all IP addresses going through that interface:

iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

Next, one must modify the /etc/xen/qemu-ifup script that is called for the newly created tapx.x virtual tap adapter (created by Qemu because of the type=ioemu vif option above):

#!/bin/sh
# $1 - tapx.x name
# $2 - bridge domain config vif option (vif = ['type=ioemu, bridge=$2'])
echo 'config qemu network with xen interface ' $*

# configure the tapx.x interface to have the ip provided in the bridge option
ip link set "$1" up arp on
ip addr add $2 dev "$1"

# add a route for the Qemu private network to go to the tapx.x interface
ip_only=$(echo $2 | awk -F/ '{print $1}')
route add $2 dev $1 src $ip_only

# make the tapx.x interface answer ARP on behalf of the virtual machines (proxy ARP)
# this will make the tapx.x interface act as a gateway
echo 1 >/proc/sys/net/ipv4/conf/$1/proxy_arp

# add the iptables rules, in case the firewall is enabled, to allow all connections in/out of the tapx.x interface
iptables -I FORWARD -m physdev --physdev-in "$1" -j ACCEPT 2>/dev/null
iptables -I FORWARD -m state --state RELATED,ESTABLISHED -m physdev --physdev-out "$1" -j ACCEPT 2>/dev/null

The idea behind these settings is that the tapx.x interface acts as a gateway for the virtual machine network, and all packets that go out through the outgoing network interface, ppp0, are NATed. This means that the guest configuration can use any IP address in the private network range, and must have the tapx.x interface address set up as its gateway:

# the configuration of my HVM Vista guest:
C:\Users\VistaOnXen>ipconfig
Windows IP Configuration
Ethernet Adapter Local Area Connection:
    ip      192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.254
    DNS     192.230.161.3, 193.230.161.4

# the configuration of my Ubuntu Dom0 privileged guest after the Vista guest booted
matmih@Express2:~$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 193.230.161.3
nameserver 193.230.161.4

matmih@Express2:~/Work/XenImages$ ifconfig
ppp0      Link encap:Point-to-Point Protocol
          inet addr:10.81.110.227  P-t-P:10.64.64.64  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:8440 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8180 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3
          RX bytes:8675386 (8.6 MB)  TX bytes:1711305 (1.7 MB)

tap1.0    Link encap:Ethernet  HWaddr f2:be:7d:4d:a1:65
          inet addr:192.168.1.254  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::f0be:7dff:fe4d:a165/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:217 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:41115 (41.1 KB)  TX bytes:4905 (4.9 KB)

matmih@Express2:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.64.64.64     0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 tap2.0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 ppp0
0.0.0.0         10.64.64.64     0.0.0.0         UG    0      0        0 ppp0

matmih@Express2:~$ sudo iptables -nvL
Chain FORWARD (policy ACCEPT 20 packets, 1569 bytes)
 pkts bytes target  prot opt in  out  source     destination
    0     0 ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0  PHYSDEV match --physdev-in tap1.0
    0     0 ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED PHYSDEV match --physdev-out tap1.0

matmih@Express2:~$ sudo iptables -nvL -t nat
Chain POSTROUTING (policy ACCEPT 7 packets, 1046 bytes)
 pkts bytes target      prot opt in  out   source     destination
  457 27563 MASQUERADE  all  --  *   ppp0  0.0.0.0/0  0.0.0.0/0

One thing to note is that some other entries may exist for the vif1.0 interface, but that interface is not really used. It will be brought down, and its iptables rules removed, when the domain is destroyed, but the tap1.0 iptables rules will have to be removed by hand.
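A sketch of that manual cleanup, mirroring the two rules inserted by the qemu-ifup script above (-D deletes the first matching rule):

sudo iptables -D FORWARD -m physdev --physdev-in tap1.0 -j ACCEPT
sudo iptables -D FORWARD -m state --state RELATED,ESTABLISHED -m physdev --physdev-out tap1.0 -j ACCEPT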

  • Xen Troubleshooting

In this last chapter I will summarize some errors I encountered along the way, together with the causes and solutions I found. First, one thing to know about are the log files and commands to check when things go wrong:

/var/log/xen/qemu-dm-{DomainName}.log - the Qemu emulator logs, including errors from the /etc/xen/qemu-ifup script
/var/log/xen/xend.log                 - xend python tool logs, xm logs
/var/log/syslog                       - assorted kernel messages + errors from the /etc/xen/scripts/* scripts
sudo xm dmesg | grep VMX              - the Xen hypervisor errors for your HVM deployment
udevadm monitor                       - prints the UDEV events received by the udevd daemon

1) Graphical issues

i) First, if you intend to use the opengl = 1 option in your HVM domain configuration file for your SDL display, you must make sure that OpenGL is correctly configured on your Dom0 system (mine was not):

matmih@Express2:~/Work/XenImages$ glxgears
X Error of failed request:  BadRequest (invalid request code or no such operation)
  Major opcode of failed request:  135 (GLX)
  Minor opcode of failed request:  19 (X_GLXQueryServerString)
  Serial number of failed request:  14
  Current serial number in output stream:  14

ii) The domain could not be launched when the xm script was run as root, because it could not open an SDL window. This was probably related to the fact that no X server was configured for the root user, so you should always run xm as a normal user with sudo rights:

matmih@Express2:~/Work/XenImages$ sudo xm create xen_vista.cfg

2) ACPI/APIC issues

Initially I tried to install Vista using 2 virtual CPUs. It all ran OK until, some minutes into the CD installation, the guest stopped with an ACPI/APIC error.

The only solution I could find was to limit the guest to 1 VCPU running on the first physical CPU (vcpus = 1, cpus = "1").

3) I/O Devices issues

i) Loopback problems - when using the loopback driver to set up a virtual disk backed by a filesystem file (using the file:/ VBD option in the domain's configuration) I got the following error:

Error: Device 768 (vbd) could not be connected. Failed to find an unused loop device

To correct the issue you can either create additional loopback devices or modify the default number of loop devices and restart the system:

mknod -m660 /dev/loopXXX b 7 8

# or add the line below to /etc/modprobe.d/local-loop - this only works if the
# loopback driver is built as a module, not compiled into the kernel
options loop max-loop=64

ii) Networking issues - one of the most common errors when trying to create an HVM domain is the following:

Error: Device 0 (vif) could not be connected. Hotplug scripts not working /sbin/hotplug need to be on your computer

The error can be caused by many things. It is a xend python tool error: after bringing up the vif paravirtualized backend for the virtual domain, its hotplug-status is not updated. Usually the Xen Store entry is updated by Xen's virtual interface configuration scripts (vif-bridge, vif-nat, vif-route) only if no error was detected, via the success python function. But apparently those scripts were not even called when I tried to create an HVM guest, or any guest for that matter. These scripts should be called by the udev configuration, discussed in an earlier post.
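When chasing this error it also helps to query the Xen Store directly (a sketch assuming the xenstore-* client tools are on the path; the domain id 14 below is just an example taken from the udev log that follows):

xenstore-list /local/domain/0/backend/vif                        # which domains have vif backends at all
xenstore-read /local/domain/0/backend/vif/14/0/hotplug-status    # should read 'connected'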

To debug what was happening, apart from viewing the Xen and Qemu logs, one thing one must do is check that all devices are brought up correctly. For that I used the udev logs; for example, this is a sample of a correct log, or at least one that works, for booting an HVM Vista guest with bridged networking:

root@Express2:/var/log/xen# udevadm monitor
monitor will print the received events for:
UDEV   - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[1258317695.583288] add /devices/vbd-14-768 (xen-backend)
KERNEL[1258317695.603290] add /devices/vbd-14-5632 (xen-backend)
KERNEL[1258317695.625653] add /devices/vif-14-0 (xen-backend)
UDEV  [1258317695.626504] add /devices/vif-14-0 (xen-backend)
KERNEL[1258317695.724992] add /devices/virtual/net/tap14.0 (net)
UDEV  [1258317695.764755] add /devices/virtual/net/tap14.0 (net)
KERNEL[1258317695.882795] add /devices/console-14-0 (xen-backend)
UDEV  [1258317695.883452] add /devices/console-14-0 (xen-backend)

In my case I could not see the vif backend being added to the system (the add /devices/vif entry). I finally discovered that the network backend driver was not compiled into the kernel, so take care with the default settings of the paravirt-ops kernel distribution's configuration:

matmih@Express2:~/Work/linux-2.6-xen$ make menuconfig
# and enable the following:
# Device Drivers --->
#     [*] Backend driver support --->
#         [*] Block-device backend driver
#         [*] Xen backend network device
# then recompile and install the kernel as in my previous post

4) Xend debugging

One other thing that you can do, before searching further, is to dump the Xen Store database to see which devices have been added to it. Most of xend's programming logic is based on polling entries from this Xen Store. Usually, to check whether a device has been successfully added to the guest machine layout, xend looks for the hotplug-status information associated with adding a device. For example, when successfully booting a Vista HVM guest I can see the blktap disk backend for the domain connected in the Xen Store:

/local/domain/0/backend/tap/1/768/hotplug-status=connected

In order to dump the Xen Store information I am using the following script:

#!/bin/bash
function dumpkey() {
    local param=${1}
    local key
    local result
    result=$(xenstore-list ${param})
    if [ "${result}" != "" ] ; then
        for key in ${result} ; do dumpkey ${param}/${key} ; done
    else
        echo -n ${param}'='
        xenstore-read ${param}
    fi
}

for key in /vm /local/domain /tool ; do dumpkey ${key} ; done
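Saved as, say, xenstore-dump.sh (the file name is my own choice), it combines nicely with grep to answer the usual question directly:

sudo ./xenstore-dump.sh | grep hotplug-status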

I hope this article will speed up your HVM guest deployment. You can find more information, probably more accurate, on the Xen project pages. In the following blogs I will present how this HVM guest setup helped me with what I wanted to do for my master's thesis.
