Category: Embedded

2017-02-20 19:14:33

----------------------------------------------------------------


Ubuntu 14.04 + VMware workstation 12.1.0 build-3272444


1. Update the system with apt-get

$ sudo apt-get update
$ sudo apt-get upgrade


2. Download VMware Workstation 12.1.0 with Firefox


wget
bule@bsky:~/Downloads$ ls
VMware-Workstation-Full-12.1.0-3272444.x86_64.bundle


3. Install VMware


sudo apt-get install libcanberra-gtk-module:i386
-----------------------------------------------------------
Installing this package avoids the error below.
Error 1:
Gtk-WARNING: Unable to locate theme engine in module_path "murrine" 

Since this is only a warning, it does not strictly need to be fixed.


To silence it anyway, add the GTK module path to the environment:
vi /etc/bash.bashrc
export GTK_PATH=$GTK_PATH:/usr/lib/x86_64-linux-gnu/gtk-2.0/modules/
source /etc/bash.bashrc


Error 2:
Gtk-Message: Failed to load module "canberra-gtk-module": libcanberra-gtk-module.so: cannot open shared object file: No such file or directory
Since the VMware installer window still opened normally, I ignored this at the time. However, when launching VMware it reported:
Could not open /dev/vmmon: No such file or directory. Please make sure that the kernel module 'vmmon' is loaded.


After a long search online, I finally found a solution, in two steps.
Step 1
First confirm whether the needed modules are installed, and where:
bule@bsky:~/Downloads$ locate libpk-gtk-module.so
bule@bsky:~/Downloads$ locate libcanberra-gtk-module.so
/usr/lib/x86_64-linux-gnu/gtk-2.0/modules/libcanberra-gtk-module.so
/usr/lib/x86_64-linux-gnu/gtk-3.0/modules/libcanberra-gtk-module.so
bule@bsky:~/Downloads$ 


Then list the loader configuration files:

bule@bsky:~/Downloads$ ls /etc/ld.so.conf.d/
i686-linux-gnu.conf  libc.conf  x86_64-linux-gnu.conf
x86_64-linux-gnu_EGL.conf  x86_64-linux-gnu_GL.conf

and edit the appropriate one:

# vim /etc/ld.so.conf.d/x86_64-linux-gnu.conf


Both GTK module directories exist under /usr/lib/x86_64-linux-gnu/ (gtk-2.0/ and gtk-3.0/). Add these two lines to the file:

/usr/lib/x86_64-linux-gnu/gtk-2.0/modules
/usr/lib/x86_64-linux-gnu/gtk-3.0/modules


Then reload the modules (if anything above went wrong, VMware will not open; otherwise these steps can be skipped):
# ldconfig
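The edit above can be sketched end to end. In this sketch $conf points at a temp file purely for illustration; on a real system it would be /etc/ld.so.conf.d/x86_64-linux-gnu.conf, edited as root and followed by ldconfig:

```shell
# Sketch: append both GTK module directories to a loader config file.
# $conf is a temp file here; substitute the real path under /etc/ld.so.conf.d
# and run as root on an actual system.
conf=$(mktemp)
printf '%s\n' \
  /usr/lib/x86_64-linux-gnu/gtk-2.0/modules \
  /usr/lib/x86_64-linux-gnu/gtk-3.0/modules >> "$conf"
cat "$conf"
# sudo ldconfig   # refresh the loader cache after editing the real file
```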


# vmware-installer -l
Product Name Product Version 
==================== ====================
vmware-workstation 12.1.0.3272444 


Step 2:
This step fixes a mismatch between the kernel version and VMware.
As root, run the following commands in order:
# service vmware stop
# rm /lib/modules/$(uname -r)/misc/vmmon.ko
# vmware-modconfig --console --build-mod vmmon /usr/bin/gcc /lib/modules/$(uname -r)/build/include/
# depmod -a
# service vmware start
Success!
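A quick way to confirm the result is to check /proc/modules for the VMware kernel modules (module names taken from the text; the check itself needs nothing from VMware):

```shell
# Sketch: report which VMware kernel modules are currently loaded,
# by scanning /proc/modules for each expected module name.
status=""
for m in vmmon vmnet vmci; do
  if grep -qw "^$m" /proc/modules 2>/dev/null; then
    status="$status$m: loaded\n"
  else
    status="$status$m: not loaded\n"
  fi
done
printf "%b" "$status"
```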
--------------------------------------------------------
chmod a+x VMware-Workstation-Full-12.1.0-3272444.x86_64.bundle
Then run the bundle to start the installation:
sudo ./VMware-Workstation-Full-12.1.0-3272444.x86_64.bundle


4. During installation, just follow the prompts and click Next.




5. Issues and fixes when installing Windows 7 in VMware
--------------------------------------------------------
Problem 1: VMware cannot load Windows 7.
Installing Windows 7 in VMware requires three conditions:
the CPU is 64-bit
the CPU supports VT
VT is enabled in the BIOS (I enabled Intel VT support in my host's BIOS)


Problem 2: I created a VM in VMware to install Windows Server, gave it 60 GB of disk space, and enabled the Hyper-V role in it. The network used VMware's NAT mode.


Because this is virtualization inside a virtual machine, make sure that in the VM's Settings > Processors the options "Virtualize Intel VT-x/EPT or AMD-V/RVI" and "Virtualize CPU performance counters" are both enabled.



bule@bsky:~/vmware/Windows 7$ ls
vmware-0.log vmware.log Windows 7.vmdk.lck Windows 7.vmxf
vmware-1.log Windows 7.nvram Windows 7.vmsd Windows 7.vmx.lck
vmware-2.log Windows 7.vmdk Windows 7.vmx
---------------------------------------
Add the following two lines to Windows 7.vmx:
hypervisor.cpuid.v0 = "FALSE"
mce.enable = "TRUE"
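Appending these by hand works, but a small idempotent helper avoids duplicate entries. In this sketch $vmx is a temp file standing in for the real "Windows 7.vmx":

```shell
# Sketch: append a vmx option only if a line for that key is not already
# present. $vmx is a temp file standing in for "Windows 7.vmx".
vmx=$(mktemp)
add_opt() {
  grep -q "^$1 " "$vmx" || printf '%s = "%s"\n' "$1" "$2" >> "$vmx"
}
add_opt hypervisor.cpuid.v0 FALSE
add_opt mce.enable TRUE
add_opt mce.enable TRUE      # second call is a no-op
cat "$vmx"
```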
-------------------------------------
Finally, turn off the Windows Server 2012 firewall.


Problem 3:
When creating a bridge br-int in Hyper-V Manager, it reported: The virtual machine's operating system has attempted to enable promiscuous mode on adapter Ethernet0. This is not allowed for security reasons. Please go to the Web page "" for help enabling promiscuous mode in the virtual machine.

The fix is to give your user group read/write access to the vmnet device:


sudo groupadd vmwaregroup
sudo usermod -a -G vmwaregroup bule
sudo chgrp vmwaregroup /dev/vmnet8
sudo chmod g+rw /dev/vmnet8
-----------------------------------------------------
The changes above are lost after a reboot. A more permanent fix is to edit /etc/init.d/vmware on the host, adding the last two lines below:
# Start the virtual ethernet kernel service
vmwareStartVmnet() {
vmwareLoadModule $vnet
"$BINDIR"/vmware-networks --start >> $VNETLIB_LOG 2>&1
chgrp vmwaregroup /dev/vmnet*
chmod a+rw /dev/vmnet*
With the above in place, when a packet-capture tool such as Wireshark runs in the guest, it puts the guest's NIC into promiscuous mode, and VMware then automatically sets vmnet8 to promiscuous mode as well (ifconfig vmnet8 promisc).


Let's verify that netif5 is indeed IFF_PROMISC. netif5 is the kernel's virtual NIC device, userif17 is the character-device interface of the user-space NAT device, and hub8.x is a port on the bridge.
# cat /proc/vmnet/hub8.0
connected netif5 tx 23
# cat /proc/vmnet/netif5
connected hub8.0 mac 00:50:56:c0:00:08 ladrf 00:00:00:00:00:00:00:00 flags IFF_RUNNING,IFF_UP,IFF_PROMISC devvmnet8
# cat /proc/vmnet/hub8.1
connected userif17 tx 0
# cat /proc/vmnet/userif17
connected hub8.1 mac 00:50:56:e3:d1:e0 ladrf 00:00:00:00:00:00:00:00 flags IFF_RUNNING,IFF_UP,IFF_BROADCAST read 0 written 0 queued 0 dropped.down 0 dropped.mismatch 20 dropped.overflow 0 dropped.largePacket 0
# cat /proc/vmnet/hub8.2
connected userif18 tx 0
# cat /proc/vmnet/userif18
connected hub8.2 mac 00:50:56:f6:3a:6b ladrf 00:00:00:00:00:00:00:00 flags IFF_RUNNING,IFF_UP,IFF_BROADCAST,IFF_ALLMULTI read 19 written 0 queued 19 dropped.down 0 dropped.mismatch 0 dropped.overflow 0 dropped.largePacket 0


--------------------------------------------------------------
Note that VMware does not use the kernel to implement NAT (ip_forward is 0); forwarding is done by the user-space vmnet-natd process.
# ps -ef|grep vmnet-natd
root 9921 1 0 13:25 ? 00:00:00 /usr/bin/vmnet-natd -s 12 -m /etc/vmware/vmnet8/nat.mac -c /etc/vmware/vmnet8/nat/nat.conf
For example, if the VM's IP is 172.16.138.128, the file /etc/vmware/vmnet8/nat/nat.conf defines the NAT gateway address as 172.16.138.2. DNAT rules can also be defined there, e.g. 8080 = 172.16.3.128:80:
[host]
# NAT gateway address
ip = 172.16.138.2
[incomingtcp]
#8080 = 172.16.3.128:80
Clearly, vmnet-natd performs the NAT translation when talking to the VM's virtual NIC: it reads Ethernet frames from /dev/vmnet8 (vmnet8 corresponds to br-tun here), extracts the destination IP and protocol, and then communicates with the remote end itself.
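The gateway value can be pulled out of a nat.conf-style file with awk. The sample config below is written to a temp file purely for illustration:

```shell
# Sketch: extract the NAT gateway address from the [host] section of a
# nat.conf-style file (sample content written to a temp file).
conf=$(mktemp)
cat > "$conf" <<'EOF'
[host]
# NAT gateway address
ip = 172.16.138.2
[incomingtcp]
8080 = 172.16.3.128:80
EOF
gw=$(awk -F' *= *' '$1 == "ip" {print $2}' "$conf")
echo "NAT gateway: $gw"
```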


The following commands demonstrate how to create a VMware bridge:
vmnet-bridge -n 4 -i eth2 -d /var/run/vmnet-bridge-4.pid -1vmnet4
mknod /dev/vmnet4 c 119 4
vmnet-netifup -d /var/run/vmnet-netifup-vmnet4.pid /dev/vmnet4 vmnet4
ifconfig eth2 0.0.0.0 promisc up
To use the bridge in a VM, add the following to the vmx file:
ethernet0.connectionType = "custom"
ethernet0.vnet = "vmnet4"
-----------------------------------------------------------------
One suggestion found online is to use bridged mode rather than NAT:
you'll be able to boot your Guest VM, and use Wireshark or whatever in the Guest. Just Remember! Your VM Guest's Network Adapter must be set to BRIDGED (connected directly to the physical network), not NAT (used to share the host's IP address).


------------------------------------------------
Following the prompts and the log, run in order:
sudo modprobe vmmon
sudo modprobe vmci
The virtual machine now starts successfully, but the network cannot connect. Clicking Connect shows:
Could not connect Ethernet0 to virtual network "/dev/vmnet8". More information can be found in the VMware.log file.
Failed to connect virtual device Ethernet0.


bule@bsky:~$ ls /usr/bin/vm*
/usr/bin/vmnet-bridge /usr/bin/vmware-installer
/usr/bin/vmnet-dhcpd /usr/bin/vmware-license-check.sh
/usr/bin/vmnet-natd /usr/bin/vmware-license-enter.sh
/usr/bin/vmnet-netifup /usr/bin/vmware-modconfig
/usr/bin/vmnet-sniffer /usr/bin/vmware-mount
/usr/bin/vmplayer /usr/bin/vmware-netcfg
/usr/bin/vmrun /usr/bin/vmware-networks
/usr/bin/vmss2core /usr/bin/vmware-ping
/usr/bin/vmstat /usr/bin/vmware-tray
/usr/bin/vmware /usr/bin/vmware-uninstall
/usr/bin/vmware-collect-host-support-info /usr/bin/vmware-usbarbitrator
/usr/bin/vmwarectrl /usr/bin/vmware-vdiskmanager
/usr/bin/vmware-fuseUI /usr/bin/vmware-vim-cmd
/usr/bin/vmware-gksu /usr/bin/vmware-vprobe
/usr/bin/vmware-hostd /usr/bin/vmware-wssc-adminTool
--------------------------


bule@bsky:~$ vmware-networks --help
vmware-networks version: 0.1
Usage: vmware-networks [--verbose | -v]
Use exactly one of these commands:
--postinstall ,,
--migrate-network-settings
--start
--stop
--status


Additional options:
--help | -h
--version


------------------------------------
Use sudo vmware-networks --start to check whether the networks can be started:
bule@bsky:~$ sudo vmware-networks --start
Started Bridge networking on vmnet0
Enabled hostonly virtual adapter on vmnet1
Started DHCP service on vmnet1
Started NAT service on vmnet8
Enabled hostonly virtual adapter on vmnet8
Started DHCP service on vmnet8
Started all configured services on all networks


Then load the vmnet module:
bule@bsky:~$ sudo modprobe vmnet
OK. After clicking Connect, the network now links up.
VM ----> Settings ------> NAT
Edit ----> Virtual Network Editor --------> bridged
------------------------------------------------------------------








References:
http://blog.csdn.net/quqi99/article/details/8727130
http://blog.csdn.net/henulwj/article/details/50347489



About VLANs in VMware
There are three VLAN approaches in VMware; see the VMware ESX Server 3 802.1Q VLAN Solutions paper:
1. VGT (Virtual Guest Tagging): tagging is done inside the VM. Set the port group's vlan_id to 4095 (which effectively makes the port group a TRUNK), and run an 802.1Q VLAN trunking driver inside the virtual machine.
2. EST (External Switch Tagging): tagging is done on the external physical switch. This is the default behavior; the port group's vlan_id is 0, which effectively disables tagging on the port group.
3. VST (Virtual Switch Tagging): tagging is done on the VMware virtual switch. VMware uses the concept of port groups, so to define a VLAN you define a port group, set its vlan_id (1-4094), and associate VMs with that port group.
To configure a VLAN on the portgroup using the VMware Infrastructure/vSphere Client:
Click the ESXi/ESX host.
Click the Configuration tab.
Click the Networking link.
Click Properties.
Click the virtual switch / portgroups in the Ports tab and click Edit.
Click the General tab.
Assign a VLAN number in VLAN ID (optional).
Click the NIC Teaming tab.
From the Load Balancing dropdown, choose Route based on originating virtual port ID.
Verify that there is at least one network adapter listed under Active Adapters.
Verify the VST configuration using the ping command to confirm the connection between the ESXi/ESX host and the gateway interfaces and another host on the same VLAN.


Note: For additional information on VLAN configuration of a VirtualSwitch (vSwitch) port group, see Configuring a VLAN on a portgroup (1003825).


To configure via the command line:
esxcfg-vswitch -p "portgroup_name" -v VLAN_ID virtual_switch_name
See: Sample configuration of virtual switch VLAN tagging (VST Mode) (1004074)





So, to set ESX to trunk mode (VGT), you need to do the following:


1) Configure a port group with vlan_id 4095, following the steps above.


2) VGT mode also requires a specific NIC driver in the guest (an 802.1Q VLAN trunking driver is required inside the virtual machine).


See: Sample configuration of virtual machine (VM) VLAN Tagging (VGT Mode) in ESX (1004252)





3) Set the physical NIC to promiscuous mode. The steps are:


Log into the ESXi/ESX host or vCenter Server using the vSphere Client.
Select the ESXi/ESX host in the inventory.
Click the Configuration tab.
In the Hardware section, click Networking.
Click Properties of the virtual switch for which you want to enable promiscuous mode.
Select the virtual switch or portgroup you wish to modify and click Edit.
Click the Security tab.
From the Promiscuous Mode dropdown menu, click Accept.


Additionally:


When using VMware VMs for VLAN-related network experiments, use the e1000 NIC. The default is vmxnet3, a paravirtualized NIC with poor VLAN support. This involves two levels: first, configure the VM in VMware to use an e1000 NIC; second, make sure the e1000 driver is installed inside the guest (check the NIC type with lspci | grep Eth). Also, use VGT mode: the VMware virtual switch must be a trunk, with the port group's vlan_id set to 4095.
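Inside the guest, the driver check can be scripted. This sketch parses `ethtool -i`-style output from a hard-coded sample; on a real guest you would pipe in `ethtool -i eth0` instead (eth0 being an assumed interface name):

```shell
# Sketch: pick the driver name out of `ethtool -i` style output.
# The sample output is hard-coded; replace the printf with
# `ethtool -i eth0` on a real guest.
sample='driver: e1000
version: 7.3.21-k8-NAPI
bus-info: 0000:02:01.0'
drv=$(printf '%s\n' "$sample" | awk '/^driver:/ {print $2}')
echo "guest NIC driver: $drv"
```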






To summarize: when running a VLAN experiment with a VM on VMware as the control node and a physical machine as the compute node, make sure that:


Tags are applied inside the VM (so the NIC driver inside the VM must support VLANs, ideally e1000, and the 8021q module must be loaded: modprobe 8021q). This means the port connecting the hypervisor's (VMware's) virtual bridge to the VM must support TRUNK.
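For tagging inside the guest, a minimal configuration sketch (assuming root, an e1000 interface named eth0, and VLAN ID 100; the interface name, VLAN ID, and address are all illustrative) looks like:

```shell
# Configuration sketch, to be run as root inside the guest.
# eth0, VLAN ID 100, and the address below are illustrative assumptions.
modprobe 8021q                                         # load the 802.1Q tagging module
ip link add link eth0 name eth0.100 type vlan id 100   # tagged subinterface
ip addr add 192.168.100.10/24 dev eth0.100             # illustrative address
ip link set eth0.100 up
```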




Similarly, if a VM on Hyper-V is the control node and its physical host is the compute node for the VLAN experiment, the principle is the same.


1. In Hyper-V, set TRUNK on the virtual switch port that the VM's virtual NIC is attached to:


Get-VMNetworkAdapter -VMName "scem1-hvsce_0415"
Set-VMNetworkAdapterVlan -VMName "scem1-hvsce_0415" -Trunk -NativeVlanId 1 -AllowedVlanIdList 1-4094 -VMNetworkAdapterName "SCE_DATA_NIC2"


See: HOWTO: Fully virtualized lab using Hyper-V 3.0 and GNS3


2. Load the 802.1Q module in the guest: modprobe 8021q


3. As for the NIC driver under Hyper-V, it seems impossible to install e1000.


Running ethtool -i eth2 and sudo modinfo hv_netvsc shows that a Hyper-V guest uses Microsoft's own hv_netvsc NIC driver.


One page ( ) says Hyper-V NICs come in two kinds, emulated and synthetic; emulated is the legacy emulated NIC, while synthetic is the faster paravirtualized one. To use the synthetic NIC, an extra driver package, Integration Services, is needed. Windows XP already ships with it, but for Linux it must be downloaded and installed separately; the latest version is 3.4, download at: . VLAN support requires this driver. An IBM page says the same: %2Fcom.ibm.scp.doc_2.1.0%2Finstalling%2Fr_limits_hyperv.html