Category: LINUX
2010-03-19 10:22:00
Configuring Nimbus TP2.2
Set up passwordless SSH in both directions for the globus user between the Nimbus server node and the VMM node.
Run ssh-keygen as the globus user on each of the two machines:
[globus@wang136 ~]$ ssh-keygen    (on wang136.hrwang.com)
Generating public/private rsa key pair.
Enter file in which to save the key (/home/globus/.ssh/id_rsa):
Created directory '/home/globus/.ssh'.
Enter passphrase (empty for no passphrase): (just press Enter)
Enter same passphrase again: (just press Enter)
Your identification has been saved in /home/globus/.ssh/id_rsa.
Your public key has been saved in /home/globus/.ssh/id_rsa.pub.
The key fingerprint is:
cf:12:b9:e8:bd:9e:a7:
[globus@cloud ~]$ ssh-keygen    (on cloud.jsgl.com)
Generating public/private rsa key pair.
Enter file in which to save the key (/home/globus/.ssh/id_rsa):
Created directory '/home/globus/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/globus/.ssh/id_rsa.
Your public key has been saved in /home/globus/.ssh/id_rsa.pub.
The key fingerprint is:
Run on wang136.hrwang.com:
[globus@wang136 .ssh]$ scp /home/globus/.ssh/id_rsa.pub globus@cloud.jsgl.com:/home/globus/.ssh/authorized_keys
Run on cloud.jsgl.com:
[globus@cloud .ssh]$ cat /home/globus/.ssh/id_rsa.pub >> /home/globus/.ssh/authorized_keys
[globus@cloud .ssh]$ scp /home/globus/.ssh/authorized_keys globus@wang136.hrwang.com:/home/globus/.ssh/
The authenticity of host 'wang136.hrwang.com (172.20.86.136)' can't be established.
RSA key fingerprint is 16:c0:83:94:55:b9:fe:b9:ad:dd:c2:c8:a7:22:c6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'wang136.hrwang.com,172.20.86.136' (RSA) to the list of known hosts.
globus@wang136.hrwang.com's password:
authorized_keys 100% 810 0.8KB/s 00:00
[globus@wang136 .ssh]$ chmod 600 /home/globus/.ssh/authorized_keys
Done. Now SSH from each machine to the other to check that passwordless login works in both directions.
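A quick way to verify this, assuming the hostnames above, is a non-interactive test in each direction; both commands should run without any password prompt (BatchMode makes ssh fail instead of asking, which is also what the Nimbus autoconfiguration tests rely on later):
[globus@wang136 ~]$ ssh -o BatchMode=yes globus@cloud.jsgl.com hostname
[globus@cloud ~]$ ssh -o BatchMode=yes globus@wang136.hrwang.com hostname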
Configure Nimbus
Run as the globus user:
[globus@wang136 ~]$ /usr/local/globus-
# ------------------------- #
# Nimbus auto-configuration #
# ------------------------- #
Using GLOBUS_LOCATION: /usr/local/globus-
Is the current account (globus) the one the container will run under? y/n:
y
Pick a VMM to test with, enter a hostname:
cloud.jsgl.com
----------
How much RAM (MB) should be allocated for a test VM on the 'cloud.jsgl.com' VMM?
2048
Will allocate 2048 MB RAM for test VM on the 'cloud.jsgl.com' VMM.
----------
Is the current account (globus) also the account the privileged scripts will run under on the VMM (cloud.jsgl.com)? y/n:
y
Does the container account (globus) need a special (non-default) SSH key to access the 'globus' account on the VMM nodes? y/n:
Please enter 'y' or 'n'
Does the container account (globus) need a special (non-default) SSH key to access the 'globus' account on the VMM nodes? y/n:
n
----------
Testing basic SSH access to globus@cloud.jsgl.com
Test command (1): ssh -T -n -o BatchMode=yes globus@cloud.jsgl.com /bin/true
Basic SSH test (1) working to globus@cloud.jsgl.com
----------
Now we'll set up the *hostname* that VMMs will use to contact the container over SSHd
Even if you plan on ever setting up just one VMM and it is localhost to the container, you should still pick a hostname here ('localhost' if you must)
*** It looks like you have a hostname set up: wang136.hrwang.com
Would you like to manually enter a different hostname? y/n:
n
Using hostname: wang136.hrwang.com
----------
Is your local SSHd server on a port different than 22? Enter 'n' or a port number:
n
Attempting to connect to: wang136.hrwang.com:22
Contacted a server @ wang136.hrwang.com:22
----------
Now we will test the basic SSH notification conduit from the VMM to the container
Test command (2): ssh -T -n -o BatchMode=yes globus@cloud.jsgl.com ssh -T -n -o BatchMode=yes -p 22 globus@wang136.hrwang.com /bin/true
Notification test (2) working (ssh from globus@cloud.jsgl.com to globus@wang136.hrwang.com at port 22)
----------
OK, looking good.
---------------------------------------------------------------------
---------------------------------------------------------------------
If you have not followed the instructions for setting up workspace control yet, please do the basic installation steps now.
Look for the documentation at:
-
----------
A sample installation command set can be provided for you here if you supply a group name. Group privileges are used for some configurations and programs.
What is a privileged unix group of globus on cloud.jsgl.com? Or type 'n' to skip this step.
globus
----------
*** Sample workspace-control installation commands:
ssh root@cloud.jsgl.com
^^^^ YOU NEED TO BE ROOT
wget
tar xzf nimbus-controls-TP2.2.tar.gz
cd nimbus-controls-TP2.2
mkdir -p /opt/workspace
cp worksp.conf.example /opt/workspace/worksp.conf
python install.py -i -c /opt/workspace/worksp.conf -a globus -g globus
*** (see 'python install.py -h' for other options, including non-interactive installation)
----------
Waiting for you to install workspace control for the account 'globus' on the test VMM 'cloud.jsgl.com'
After this is accomplished, press return to continue.
----------
Going to test container access to workspace control installation.
On 'cloud.jsgl.com', did you install workspace-control somewhere else besides '/opt/workspace/bin/workspace-control'? y/n:
n
Test command (3): ssh -T -n -o BatchMode=yes globus@cloud.jsgl.com /opt/workspace/bin/workspace-control -h 1>/dev/null
Workspace control test (3) working
----------
Testing ability to push files to workspace control installation.
We are looking for the directory on the VMM to push customization files from the container node. This defaults to '/opt/workspace/tmp'
Did you install workspace-control under some other base directory besides /opt/workspace? y/n:
n
Test command (4): scp -o BatchMode=yes /usr/local/globus-
transfer-test-file.txt 100% 73 0.1KB/s 00:00
SCP test (4) working
----------
Great.
---------------------------------------------------------------------
---------------------------------------------------------------------
Now you will choose a test network address to give to an incoming VM.
Does the test VMM (cloud.jsgl.com) have an IP address on the same subnet that VMs will be assigned network addresses from? y/n:
y
----------
What is a free IP address on the same subnet as 'cloud.jsgl.com' (whose IP address is '172.20.86.169')
192.168.1.2
----------
Even if it does not resolve, a hostname is required for '192.168.1.2' to include in the DHCP lease the VM will get:
client1
----------
What is the default gateway for 192.168.1.2? (guessing it is 172.20.87.254)
You can type 'none' if you are sure you don't want the VM to have a gateway
192.168.1.1
----------
What is the IP address of the DNS server that should be used by the VM? (guessing it is 202.106.0.20)
You can type 'none' if you are sure you don't want the VM to have DNS
202.106.0.20
----------
OK, in the 'make adjustments' step that follows, the service will be configured to provide this ONE network address to ONE guest VM.
You should add more VMMs and more available network addresses to assign guest VMs only after you successfully test with one VMM and one network address.
----------
*** Changes to your configuration are about to be executed.
So far, no configurations have been changed. The following adjustments will be made based on the questions and tests we just went through:
- The GLOBUS_LOCATION in use: /usr/local/globus-
- The account running the container/service: globus
- The hostname running the container/service: wang136.hrwang.com
- The contact address of the container/service for notifications: globus@wang136.hrwang.com (port 22)
- The test VMM: cloud.jsgl.com
- The available RAM on that VMM: 2048
- The privileged account on the VMM: globus
- The workspace-control path on VMM: /opt/workspace/bin/workspace-control
- The workspace-control tmpdir on VMM: /opt/workspace/tmp
- Test network address IP: 192.168.1.2
- Test network address hostname: client1
- Test network address gateway: 192.168.1.1
- Test network address DNS: 202.106.0.20
----------
These settings are now stored in '/usr/local/globus-
If you type 'y', that script will be run for you with the settings.
Or you can answer 'n' to the next question and adjust this file.
And then manually run '/usr/local/globus-
OK, point of no return. Proceed? y/n
y
*** Running /usr/local/globus-
# ------------------------------------------- #
# Nimbus auto-configuration: make adjustments #
# ------------------------------------------- #
Read settings from '/usr/local/globus-
----------
[*] The 'service.sshd.contact.string' configuration was:
... set to 'globus@wang136.hrwang.com:22'
... (it used to be set to 'REPLACE_WITH_SERVICE_NODE_HOSTNAME:22')
... in the file '/usr/local/globus-
----------
[*] The 'control.ssh.user' configuration was:
... set to 'globus'
... (it used to be set blank)
... in the file '/usr/local/globus-
----------
[*] The 'use.identity' configuration does not need to be changed.
... already set to be blank
... in the file '/usr/local/globus-
----------
[*] The 'control.path' configuration does not need to be changed.
... already set to '/opt/workspace/bin/workspace-control'
... in the file '/usr/local/globus-
----------
[*] The 'control.tmp.dir' configuration does not need to be changed.
... already set to '/opt/workspace/tmp'
... in the file '/usr/local/globus-
----------
[*] Backing up old resource pool settings
... created new directory '/usr/local/globus-
... moved 'pool1' to '/usr/local/globus-
----------
[*] Creating new resource pool
... created '/usr/local/globus-
----------
[*] Backing up old network settings
... created new directory '/usr/local/globus-
... moved 'private' to '/usr/local/globus-
... moved 'public' to '/usr/local/globus-
----------
[*] Creating new network called 'public'
... created '/usr/local/globus-
----------
NOTE: you need to MATCH this network in the workspace-control configuration file.
This configuration file is at '/opt/workspace/worksp.conf' by default
For example, you might have this line:
association_0: public; xenbr0; vif0.1 ; none; 192.168.1.2/24
... "public" is the name of the network we chose.
... "xenbr0" is the name of the bridge to put VMs in this network on.
... "vif0.1" is the interface where the DHCP server is listening in dom0 on the VMM
... and the network address range serves as a sanity check (you can disable that check in the conf file)
----------
Making sure 'fake mode' is off:
[*] The 'fake.mode' configuration was:
... set to 'false'
... (it used to be set to 'true')
... in the file '/usr/local/globus-
----------
Finished.
See 'NOTE' above.
Add network addresses for the Xen VMs
Run as the globus user:
[globus@wang136 network-pools]$ pwd
/usr/local/globus-
[globus@wang136 network-pools]$ vi private1
# DNS server IP or 'none'
202.106.0.20
# hostname ipaddress gateway broadcast subnetmask
client2 192.168.1.2 192.168.1.1 192.168.1.255 255.255.255.0
client3 192.168.1.3 192.168.1.1 192.168.1.255 255.255.255.0
client4 192.168.1.4 192.168.1.1 192.168.1.255 255.255.255.0
client5 192.168.1.5 192.168.1.1 192.168.1.255 255.255.255.0
client6 192.168.1.6 192.168.1.1 192.168.1.255 255.255.255.0
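Like the 'public' network created by the autoconfiguration (see the NOTE in its output above), this 'private1' pool has to be matched by an association line in the workspace-control configuration on the VMM, /opt/workspace/worksp.conf by default. A sketch of such a line, reusing the bridge and DHCP-listener interface from the earlier example (the association index, bridge and interface names are assumptions and may differ on your VMM):
# network name; bridge for the VMs; dom0 interface where the DHCP server listens; none; address-range sanity check
association_1: private1; xenbr0; vif0.1 ; none; 192.168.1.0/24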
Adjust some cloud settings
Run as the globus user:
[globus@wang136 nimbus-autoconfig]$ pwd
/usr/local/globus-
[globus@wang136 nimbus-autoconfig]$ ./cloud-admin.sh --add-dn "/O=Grid/OU=GlobusTest/OU=simpleCA-wang136.hrwang.com/OU=hrwang.com/CN=Hongrui Wang"
You do not have a file with one or more cloud configuration decisions at:
/usr/local/globus-
This needs to be set up in order to proceed with adding a DN.
Once it is set up, you will not have to answer the questions again.
Continue? y/n:
y
When you add users on a regular basis, do you want this script to adjust a local grid-mapfile? y/n:
y
What is the absolute path to that grid-mapfile?
/etc/grid-security/grid-mapfile
Will the new DN always map to the same account (typical for nimbus grid-mapfile)? y/n:
y
What is that account name?
nimbus
----------
When you add users on a regular basis, do you want this script to also create the appropriate directories at the cloud repository? y/n:
y
Is the repository on another node? y/n:
n
What is the base directory for new user directories? (like "/cloud")
/cloud
Does each cloud user have the same UNIX account? y/n:
n
Do you want to softlink a set of read-only files into the directory (starter images)? y/n:
y
You need to create a directory on the same node that contains the files to softlink new users to. What is the absolute path of that directory?
Please enter something.
You need to create a directory on the same node that contains the files to softlink new users to. What is the absolute path of that directory?
/cloud-backup
Problem: Could not find a proper GroupAuthz module definition in this file:
'//usr/local/globus-
... did you enable the group authorization module?
... see -h
[globus@wang136 nimbus-autoconfig]$ ./cloud-admin.sh --enable-groupauthz
Enabling group authorization module.
Enabled. Container restart is necessary.
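Since the group authorization module was just enabled, the container has to be restarted before the new policy takes effect (the --add-dn call below only edits files, so it can be run first). Assuming a standard GT4 layout under the GLOBUS_LOCATION shown above, the restart is roughly:
[globus@wang136 ~]$ $GLOBUS_LOCATION/bin/globus-stop-container
[globus@wang136 ~]$ $GLOBUS_LOCATION/bin/globus-start-container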
[globus@wang136 nimbus-autoconfig]$ ./cloud-admin.sh --add-dn "/O=Grid/OU=GlobusTest/OU=simpleCA-wang136.hrwang.com/OU=hrwang.com/CN=Hongrui Wang"
Adding '/O=Grid/OU=GlobusTest/OU=simpleCA-wang136.hrwang.com/OU=hrwang.com/CN=Hongrui Wang'
Pick a group to add this DN to:
[01] - Group: 'TESTING'
- All-time max allowed usage: 300 minutes
- Max simultaneous reserved: 300 minutes
- Max simultaneous VMs: 5
- Max VMs in one group request: 1
[02] - Group: 'DEVELOPMENT'
- All-time max allowed usage: 20160 minutes
- Max simultaneous reserved: 20160 minutes
- Max simultaneous VMs: 5
- Max VMs in one group request: 5
[03] - Group: 'SCIENCE'
- All-time max allowed usage: unlimited
- Max simultaneous reserved: unlimited
- Max simultaneous VMs: 16
- Max VMs in one group request: 16
[04] - Group: 'SUPERUSER'
- All-time max allowed usage: unlimited
- Max simultaneous reserved: unlimited
- Max simultaneous VMs: unlimited
- Max VMs in one group request: unlimited
Choose a number:
04
SUCCESS
Added DN: '/O=Grid/OU=GlobusTest/OU=simpleCA-wang136.hrwang.com/OU=hrwang.com/CN=Hongrui Wang'
To group: 'SUPERUSER'
Group policies:
- All-time max allowed usage: unlimited
- Max simultaneous reserved: unlimited
- Max simultaneous VMs: unlimited
- Max VMs in one group request: unlimited
Access list: '/usr/local/globus-
Time: May 27, 2009 8:31:10 PM
---------------------------------------
Adding DN '/O=Grid/OU=GlobusTest/OU=simpleCA-wang136.hrwang.com/OU=hrwang.com/CN=Hongrui Wang'
... with user mapping 'nimbus'
... to grid-mapfile '/etc/grid-security/grid-mapfile'
Nothing to do, the DN is already added to '/etc/grid-security/grid-mapfile'
---------------------------------------
Base directory present: '/cloud'
Exiting, user's directory already present: '/cloud/d9383407'
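For reference, the mapping cloud-admin.sh maintains in the grid-mapfile uses the standard quoted-DN-to-account format, so with the DN and account above the relevant line in /etc/grid-security/grid-mapfile is simply:
"/O=Grid/OU=GlobusTest/OU=simpleCA-wang136.hrwang.com/OU=hrwang.com/CN=Hongrui Wang" nimbus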
Note: I had actually already done the operations from step 8.
Create the cloud directory
Next, create the directory that the Nimbus client uploads images into; this is the path the client works against. This step can only be done after the Nimbus client has been deployed.
[root@wang136 opt]# mkdir -p /cloud/d9383407
[root@wang136 opt]# chown -R nimbus:nimbus /cloud/d9383407/
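A quick sanity check that the repository directory is in place and owned by the mapped account (paths as above; the exact listing will vary):
[root@wang136 opt]# ls -ld /cloud/d9383407    # should show nimbus nimbus as owner and group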