2008-05-02 20:52:11

[Translator's note] This series of articles introduces a tool called Conga, which is included in the newly released RHEL 5. With Conga, you can easily configure and manage your server clusters and storage arrays. Below is a preview of some of the tool's features.

This article introduces Conga, a new application released as part of Red Hat® Enterprise Linux® 5. Walk through setting up your cluster and storage systems properly. Preview some of the major features of this new clustering and storage management tool.

The problem: Multiple cluster and storage system management tools

In the past, in order to create and manage complex clusters and storage systems, you had to make use of multiple management tools. Conga solves the problem of individually installing and managing storage systems and cluster nodes.

Conga grew out of the lineage of GUI applications developed to support storage and clusters. Tools such as red-hat-cluster, system-config-cluster, deploy-tool, and system-config-lvm are examples of Conga’s heritage. “Conga addresses a number of ‘pain points’ customers encountered in our earlier X/Gtk management applications,” says Kevin Anderson, Director of Storage and Cluster Development. “Users frequently commented that while they found value in the interfaces we were providing, they did not routinely install X and Gtk libraries on their production servers. Conga solves this problem by providing an agent that is resident on the production servers and is managed through a web interface.”

Specifically, Conga provides:

  • A single web interface for all cluster and storage management tasks
  • Automated deployment of cluster data and supporting packages
  • Easy integration with existing clusters
  • Integration of cluster status and logs
  • Fine-grained control over user permissions

We’ll take a look at each of these features as we walk through some basic Conga tasks later in this article. To start, let’s take a look at just how Conga works.

Architecture overview: When Luci met Ricci

The following diagram illustrates the Conga architecture.

Fig.1. The Conga architecture.

The elements of this architecture are:

  • luci is an application server that serves as a central point for managing multiple clusters. luci maintains a database of node and user information. Once a system running ricci authenticates with a luci server, it will never have to re-authenticate unless the certificate used is revoked. You’ll typically have only one luci server for your clusters.
  • ricci is an agent that is installed on all servers being managed.
  • the web client is typically a browser, like Firefox, running on a machine in your network.

The interaction is as follows. Your web client securely logs into the luci server. Using the web interface, the administrator issues commands which are then forwarded to the ricci agents on the nodes being managed.

Starting Conga

Once luci is installed (an Enterprise Linux 5 package), its database must be initialized and an admin account must be set up. These tasks are accomplished with the luci_admin command-line utility. After this, the luci service should be started:

service luci start
Starting luci:                                             [  OK  ]

Please, point your web browser to  to access luci

And that’s all there is to it. Let’s log into luci and look around. The administrative login window looks like this:

Fig.2. The luci administrative window.

Once you’ve logged in, you are at the “homebase” tab of the administrative web interface. From this tab, you can add clusters, storage systems, and users. The other tabs support administrative tasks specific to either clusters or storage systems.

Fig.3. The homebase tab.

Let’s walk through using Conga to perform some basic tasks. Most likely, you’ll start using Conga by administering an existing cluster. This is where we’ll start.

Conga and clusters: Accessing an existing cluster

To illustrate this task, we’ll start with an existing cluster. Note that all the nodes of the cluster must be up and running when we start, as luci will not let us manage only part of an existing cluster, or a cluster with nodes that are unreachable.

We start by accessing the “homebase” tab in the web interface and then selecting the “add an existing cluster” option. This brings us to the following display. We enter the name of one node in the cluster. Luci will access this node and retrieve the cluster information. Note that if the passwords are the same for all the nodes, we can tell luci to authenticate with the password that we entered for this node.

Fig.4. Adding an existing cluster.

When we press “submit,” luci queries the nodes in the cluster and creates the following display. Note that the initial node that we entered is authenticated.

Fig.5. Initial node creation.

After we complete the form by entering the passwords for the other nodes, pressing “add this cluster” will result in the cluster being added to luci’s database. We can then view the cluster.

Fig.6. Viewing the new cluster.

Note that if there was a problem on a cluster node (for example, one of the cluster daemons was not running), that node would be displayed in red text.

The process to create a new cluster with Conga is similar. We’ll take a look at this next.

Conga and clusters: Building a new cluster

When building a new cluster with Conga, the nodes have to meet some of the same requirements as nodes of an existing cluster that is being administered by Conga. For example, all of the nodes must be running the ricci agent software, must be reachable, and must be able to authenticate via the root password. If some of the nodes that you want to include in your new cluster are unreachable, however, you can create the cluster without them and add them to the cluster later when they are available.

The first step in creating the new cluster is to define the cluster’s name and select the nodes that will comprise the cluster. To do this, you select “Create a New Cluster” from the cluster tab and fill in this form:

Fig.7. Creating a new cluster.

The options listed in this form are:

  • Download packages from Red Hat Network® – If you select this option, when Conga creates the cluster all the packages necessary to support the cluster (rgmanager, cman, etc.) are automatically downloaded and installed. This is a helpful feature in that the cluster nodes will have the most recent version of the packages you install. Using this option also ensures that all the cluster nodes are running the same version of the cluster packages.
  • Use locally installed packages – Like the name says, selecting this option does not download the cluster packages.
  • Enable Shared Storage Support – Selecting this option downloads the clustered volume manager (CLVM) packages, installs them, and sets up the lvm.conf file on the cluster nodes.
  • Check if cluster node passwords are identical – This option can save you some typing, and help you avoid typos. If you select this option, the password fields for all cluster nodes other than the first node are disabled and the password that you enter for the first node is used by Conga for all the cluster nodes.
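The “Enable Shared Storage Support” option above ultimately comes down to switching LVM from local to clustered locking in lvm.conf. A sketch of the relevant fragment (on RHEL 5, locking_type 3 selects the built-in clustered locking used with clvmd):

```
# /etc/lvm/lvm.conf (fragment)
global {
    # locking_type = 1 is local, file-based locking;
    # locking_type = 3 enables clustered locking via clvmd
    locking_type = 3
}
```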

After you fill in the form and press ‘Submit’, Conga begins the cluster creation process. While the cluster’s nodes are being updated, you’ll see a dynamic display like the following. When the cluster creation is complete, you see the same display as is seen in Fig.6.

Fig.8. Dynamically updated display of cluster node status.
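Behind the web interface, the configuration Conga builds is written to /etc/cluster/cluster.conf on each node. The following is a minimal sketch of such a file for a hypothetical three-node cluster (cluster and node names are invented for illustration); here it is written to a temporary file rather than the live path:

```shell
# Sketch of the cluster.conf Conga generates (hypothetical names).
cat > /tmp/cluster.conf <<'EOF'
<?xml version="1.0"?>
<cluster name="webfarm" config_version="1">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1" votes="1"/>
    <clusternode name="node2.example.com" nodeid="2" votes="1"/>
    <clusternode name="node3.example.com" nodeid="3" votes="1"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>
EOF

# Sanity check: the file should declare exactly three cluster nodes.
grep -c '<clusternode ' /tmp/cluster.conf
```

Each time the configuration changes, the config_version attribute is incremented so the nodes can agree on the newest version.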

Now, let’s take a look at managing storage with Conga.

Conga and storage systems

In addition to managing your clusters, Conga can manage storage systems–all through the same web interface. In addition, Conga enables you to define user accounts and then restrict those users to accessing storage on specific storage systems. Adding a new storage system to Conga just requires that you enter the storage system name and password as the luci admin user. Once you do this, you can restrict access to storage systems with this form:

Fig.9. User access.

The result is that when user “jsmith” logs into luci, she will only be able to access storage system “tng3-3.” You as the admin user can view all the storage nodes. Once a storage system is being managed by luci, you can manage disk partitions, logical volume groups, and RAID filesystems. You can also retrieve system logs through the web interface. For example:

Fig.10. Viewing system logs.

Note that when a cluster is added to a luci server, all the nodes in that cluster are added as storage systems. If you don’t want some or all of these cluster nodes to also be managed as storage systems, you can remove individual storage systems from luci while the cluster management capability is maintained. Also note that storage system user accounts and passwords used in Conga are in addition to any existing system or cluster user accounts and passwords. Defining user accounts and passwords in Conga will not affect existing system accounts or passwords.

What’s next?

This article is intended to give a brief introduction to the features provided by Conga. In subsequent articles, we’ll examine specific topics in greater detail. These topics will include:

  • Cluster creation and management with Conga
  • Storage system management with Conga
  • Conga administrative and trouble-shooting tips

The topics summarized in this article are covered in greater detail in the excellent user guide and online help provided with Conga. If this article has whetted your appetite for learning more about Conga, then the next step is to get a copy and try it out!

Acknowledgments

The credit for the creation and development of Conga goes to the team who first brought it to life (Jim Parsons–Designer and Architect for Conga, Kevin Anderson–Director of Storage and Cluster Development, Rob Kenna–Product Manager, Ryan McCabe, and Stan Kupcevic) and the community that is making contributions toward its future. The information in this article is in large part adapted from the extensive Conga online help that they created. Without their help, encouragement, and insightful review comments, this article could not have been written.

About the author

Len DiMaggio is a QE Engineer at Red Hat in Westford, MA (USA) and has published articles on software engineering in Dr. Dobbs Journal, Software Development Magazine, IBM Developerworks, STQE, and other journals. He is a frequent contributor to Red Hat Magazine.
