2009-10-28 11:35:04

Those of us who have been running VMware for a while know we always had two choices of NIC interface in our virtual machines: the vlance (AMD PCnet32) NIC and the vmxnet (VMware) NIC. That distinction faded as the vlance card became 'intelligent': when VMware Tools is running in your VM, optimized code is automatically used to talk to the NIC interface; if VMware Tools is not loaded, it just runs with basic AMD emulation.

Intel PRO/1000 MT

But today (I am still learning every day) I found out we have a new choice: we can also have an Intel PRO/1000 MT in our virtual machines. I have tested it with VMware Workstation 5.5 and ESX 3 (beta 2), but I am fairly sure it also works in VMware Player and VMware Server.

So how do we get this Intel PRO/1000 MT card into our VM? It's quite simple: edit your .vmx file and make sure the Ethernet configuration contains this line:

ethernet0.virtualDev = "e1000"
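If you prefer scripting the change (on products where the VM's files are reachable from a shell), the edit can be sketched as follows. The file name and its starting contents are just examples, and the snippet assumes GNU sed for in-place editing; make sure the VM is powered off before touching its .vmx:

```shell
# Create a tiny example .vmx standing in for a real VM's config file
printf 'ethernet0.present = "TRUE"\nethernet0.virtualDev = "vlance"\n' > myvm.vmx

# Replace an existing virtualDev line, or append one if it is missing
if grep -q '^ethernet0\.virtualDev' myvm.vmx; then
  sed -i 's/^ethernet0\.virtualDev.*/ethernet0.virtualDev = "e1000"/' myvm.vmx
else
  echo 'ethernet0.virtualDev = "e1000"' >> myvm.vmx
fi
```

The grep-or-append guard matters because a .vmx usually already carries a virtualDev line for an existing NIC, and duplicate keys are best avoided.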

Under most products this works easily, except on ESX 3 (beta 2). In my ESX3/VC2 environment, some background process kept changing my configuration back. After some long, frustrating testing I found a solution: change the .vmx file with vi and save it, but leave vi open. Now power on the virtual machine. vi keeps the file locked, so ESX cannot change it back, and voila, it works.

So now the question, of course, is: why would I do this? In my case I started playing with this because I had a virtual appliance that did network analysis, and the software vendor had only configured it for an Intel PRO 1000 NIC, so I had to make this change.

Now that I was aware of this option, I was of course curious whether there was a performance difference between the vlance and Intel PRO NICs. I am not allowed to publish benchmark results, but I can tell you there is one, and in different scenarios there were different winners. In pure TCP network traffic the vlance was faster; in pure UDP traffic the Intel won by far. When copying files (so disk and network combined) the Intel frequently won as well. So if you have a VM that really needs to squeeze every bit of performance out of a NIC, run a test and see which one works for you. Of course, not only NIC speed should be considered here, but also the difference in CPU usage.

I also think I know why the Intel PRO 1000 card was introduced in the VM: all 64-bit virtual machines use this card by default, as there is a 64-bit driver available for this NIC but not for the vlance card. In my tests, however, I was also able to put this NIC into my 32-bit virtual machines.
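To illustrate the point above, a 64-bit guest's .vmx typically ends up with entries along these lines (the guestOS value is just an example):

```
guestOS = "winnetenterprise-64"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
```

whereas a 32-bit guest without an explicit virtualDev line falls back to the default vlance/vmxnet behavior described earlier.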

Comments:
  1. Scott Lowe
    1:05 am on

    Any ideas if this works on GSX Server 3.2.1?


  2. 7:23 am on

    I wasn’t able to get this working, even locking the VM with the vi editor. The VM still shows a VMware PCI adapter installed. I’m using ESX 2.5.2 with Patch 3 installed and W2K3SP1 on my VM.

  3. Administrator
    10:17 am on

    Just to clarify: the Intel PRO 1000 NIC is only available on VMware products that support running 64-bit virtual machines, so not ESX 2.x or GSX 3.x.

  4. AnilM
    12:51 pm on

    Does this trick work on a 32-bit Windows XP guest machine? I am running a Windows XP guest on VMware Server with a Windows 2003 Server host OS. After editing the .vmx and starting the virtual machine, both network adapters are removed.


  5. 12:30 pm on

    Thanks for the tip!

    Also, by installing Intel Pro 1000 MT drivers, you can get VGT support in Windows! Just make sure it’s a 64bit OS. I am testing it now, and I have verified it to work correctly. I just need to do some more testing before I can verify and recommend it for production environments!

    As it looks now, it misses some pings to one network, but it’s probably due to a bad route.

