Controlling computer clusters under Linux

Oct. 1, 2002

Jason Lowry
Linux NetworX

Paying for expensive and proprietary software and hardware only to end up tied to an inflexible computer platform is a trend of the past. Today's rapidly changing oil and gas industry is shifting toward open source software platforms using commercial off-the-shelf computer components.

Linux clustering is the culmination of both of these concepts, using Linux as the operating system and hardware based on standard x86 (Intel) architectures. Clustering leverages the power of the Linux operating system while harnessing low-cost commercial-off-the-shelf (COTS) components to deliver a hardware/software package that is powerful, scalable, flexible, and reliable.

Clustering leverages the power of the Linux operating system while harnessing the power of low-cost COTS components.


Clustering is a method of linking multiple computers, or nodes, together to form a unified, powerful, and reliable system. But clustering is still a loosely used term, and the functionality it describes can vary greatly. Linux clusters are used for seismic imaging, data collection, research, and data warehousing, and they are becoming a widely accepted standard.

Categories

Three main categories of Linux clusters exist: the Beowulf or supercomputer cluster, the data cluster, and the server/render farm. Despite the cost savings, questions remain about the manageability of Linux clusters.

A common myth is that PhD-level knowledge is required to adopt the technology. At one time this was true.

Today, vendors provide services such as integration, installation, system optimization, and training. New cluster management tools also help administrators control complex systems. The barriers have been significantly lowered.

Administrator challenges

Administrators face several issues in managing and maintaining a Linux cluster. They need to know not only where the nodes are, but also which users and jobs they are serving, what they are doing, how hard they are working, and where the network bottlenecks are.

The challenge to the administrator is finding the best available tools to help do the job as painlessly as possible. Cluster administrators need empowering tools to help them essentially become omniscient and omnipotent over their systems. Items to consider include:

  • Cluster efficiency
  • Hardware failures
  • Software upgrades
  • Remote access
  • Cloning and storage management
  • System consistency.

Control

Given the inherent complexities of cluster systems, administrators face a variety of challenging issues. For example, how do you install and configure 1,000 headless servers that have no CD-ROM or floppy drive? This is a valid concern, because many clustering vendors have lowered costs by eliminating non-essential hardware components.

Network booting is the solution to many of these problems, including installation of the operating system. Network-based booting methods are generally based on the dynamic host configuration protocol (DHCP) or the bootstrap protocol (BOOTP) and can be started from a boot ROM on the network card.

In a network boot configuration, a master host is responsible for listening for and answering DHCP requests from newly powered-on nodes and sending each node a micro-kernel, which is served via the trivial file transfer protocol (TFTP). Once a node has a kernel and a working network connection, several methods of installing or updating the node are available.
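
A rough sketch of how a master host might tie these pieces together follows. It generates per-node host entries in the style of an ISC dhcpd configuration file, so that each node is recognized by its network card's hardware address, handed a fixed IP address, and pointed at a kernel to fetch over TFTP. The node names, addresses, and paths here are hypothetical:

    # Hypothetical sketch: emit ISC dhcpd host entries so each node,
    # identified by its network card's MAC address, receives a fixed IP
    # and is told where to fetch its boot kernel via TFTP.
    nodes = {
        "node001": ("00:50:56:aa:00:01", "10.0.0.1"),
        "node002": ("00:50:56:aa:00:02", "10.0.0.2"),
    }

    MASTER = "10.0.0.254"          # master host running dhcpd and tftpd
    KERNEL = "/tftpboot/vmlinuz"   # micro-kernel served over TFTP

    for name, (mac, ip) in sorted(nodes.items()):
        print("host %s {" % name)
        print("    hardware ethernet %s;" % mac)
        print("    fixed-address %s;" % ip)
        print("    next-server %s;" % MASTER)
        print('    filename "%s";' % KERNEL)
        print("}")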

Linux NetworX's ICE Box stores a history of characters in cache so the history of node errors can be reviewed directly or logged to a remote location.


Software solutions that can install nodes over a network are available for Linux clustering, including Turbolinux Cluster Cockpit, SystemImager, and Linux NetworX's ClusterWorX.

Many clusters consist of heterogeneous hardware or have nodes assigned to different tasks, so it is important to select a software solution that can manage multiple node images and groups. These packages also allow remote partitioning of nodes and configuration of diskless nodes that boot over the network file system (NFS).
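
As a small illustration of what group-aware, diskless configuration involves, the sketch below maps hypothetical node groups to system images and prints /etc/exports-style entries so that nodes in each group could mount a shared root image over NFS. The group names, image paths, and network range are assumptions:

    # Hypothetical sketch: map node groups to system images and emit
    # /etc/exports entries so diskless nodes in each group can mount
    # their root file system over NFS.
    groups = {
        "compute": "/export/images/compute",
        "render":  "/export/images/render",
    }
    CLUSTER_NET = "10.0.0.0/255.255.255.0"   # assumed cluster network

    for group, image in sorted(groups.items()):
        print("# root image for the %s group" % group)
        print("%s  %s(ro,no_root_squash)" % (image, CLUSTER_NET))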

Monitoring

Once the nodes are installed, how do you monitor and manage headless nodes? Many command-line tools exist, but most are not sufficient for monitoring a large cluster. Monitoring and managing nodes is preferably done through a graphical user interface, which allows the user to easily check the basic status of each node. Most available interfaces show only basic system-health information, such as a node's "heartbeat."

Some more advanced interfaces display system statistics with features such as CPU and memory usage, network bandwidth, and disk input/output (I/O). Getting this information usually requires a daemon or agent to run on each of the nodes and report these different values to the host machine.
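
A minimal sketch of such an agent, assuming a master host collecting reports over UDP, might look like the following. It reads the one-minute load average and memory figures from /proc and sends them to the host; the address, port, and message format are illustrative rather than any particular product's protocol:

    # Hypothetical per-node monitoring agent: read basic statistics from
    # /proc and report them to the host machine over UDP.
    import socket, time

    HOST = ("10.0.0.254", 9999)   # assumed master host and port

    def read_stats():
        load = open("/proc/loadavg").read().split()[0]   # 1-minute load
        mem = {}
        for line in open("/proc/meminfo"):
            fields = line.split()
            if fields and fields[0] in ("MemTotal:", "MemFree:"):
                mem[fields[0]] = fields[1]               # value in kB
        return "load=%s memtotal=%s memfree=%s" % (
            load, mem.get("MemTotal:"), mem.get("MemFree:"))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while 1:
        sock.sendto(read_stats().encode(), HOST)   # one report per interval
        time.sleep(5)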

While most open source tools are currently immature, tools such as bWatch, Scyld's software, and the Scalable Cluster Environment (SCE) have made some progress.

Another consideration is cluster resource management. Several good queuing systems are available for cluster systems, notably Platform Computing's LSF and Veridian's OpenPBS software.

Ideally, cluster-monitoring software would allow administrators to add their own monitoring and management features to the original program. Some server management solutions, such as ClusterWorX, allow for this.

Event management

Automatic system administration is part of the Unix administrator's daily life. Automation allows administrators to take a repetitive task and have it handled by the software. This automation becomes particularly attractive on cluster systems with several hundred machines to manage. Time-based automation, such as Cron, provides invaluable functionality for basic administration; however, time-based management fails to monitor the system. Event-based management is crucial for monitoring the system while the administrator is away.

Event management takes system administration to the next level. An event monitors a property, or group of properties, such as disk I/O or user CPU, and sets a threshold on each property. If a threshold is exceeded, a script is run and an action is taken.
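
The mechanism is simple enough to sketch. The example below watches a single property, the one-minute load average, and runs an administrator-supplied script whenever the threshold is exceeded; the threshold, poll interval, and script path are all assumptions:

    # Hypothetical event-management loop: monitor one property and take
    # an action when its threshold is crossed.
    import os, time

    THRESHOLD = 8.0                           # trigger level for this property
    ACTION = "/usr/local/sbin/notify-admin"   # assumed action script

    def load_average():
        return float(open("/proc/loadavg").read().split()[0])

    while 1:
        if load_average() > THRESHOLD:
            os.system(ACTION)                 # threshold exceeded: run the script
        time.sleep(30)                        # poll interval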

Many administrators have developed these types of tools internally; however, they are generally fine-tuned to a specific purpose. Few open source tools have made progress in event management.

Node access

Serial access to nodes is popular and well suited to high-availability clusters, especially those in a collocation facility or off-site location, because it offers reliable backup access to the cluster, even in the event of node network failures. High-performance cluster users have much to gain from serial access as well.

The Linux kernel can redirect kernel messages to a serial port (via a boot parameter such as console=ttyS0,9600), making it possible to watch a node boot from a remote location. In fact, some motherboards allow the serial line to be activated as early as the initial BIOS messages, and several new server boards allow serial configuration of BIOS settings or BIOS access through a serial port. Nice features, but only if you have a serial switch to take advantage of them.

Because the kernel brings the serial console up early in the boot process, before the network is started, the serial console gives access to machines at a point no network login can reach. This helps diagnose and solve problems that network access cannot. For example, running Red Hat's interactive start-up to troubleshoot boot problems, running in single-user mode, and performing manual file system checks would not be possible remotely without a serial console.

All Linux console errors can be redirected to the serial port; therefore, any node errors that would normally appear in the xconsole are also sent to the serial port. Unless someone or something is watching, however, those errors scroll past the serial port and disappear.
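
In software terms, the idea looks something like the sketch below: capture everything arriving on a node's serial console, keep a bounded history in memory, and append it to a per-node log so errors survive until someone reads them. The device, log path, and cache size are assumptions, and the serial line is presumed to be configured already (for example, with stty):

    # Hypothetical console watcher: cache recent serial console output
    # per node and log it so errors are not lost.
    PORT = "/dev/ttyS0"                    # serial line carrying the console
    LOG = "/var/log/console/node001.log"   # assumed per-node log location
    HISTORY_LINES = 200                    # size of the in-memory cache

    history = []
    console = open(PORT, "r")
    log = open(LOG, "a")
    while 1:
        line = console.readline()
        if not line:
            continue
        history.append(line)                 # per-node history cache
        history = history[-HISTORY_LINES:]   # keep only the newest lines
        log.write(line)
        log.flush()                          # keep the log current if the node dies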

Some advanced serial console switches can store console history in a per-node cache. Two of these are Lightwave Communications' ConsoleServer 3200 and Linux NetworX's ICE Box. ICE Box stores a history of characters in cache so that if errors occur, the history can be reviewed, and it also allows serial console errors to be logged to a remote location.

Good serial console switches offer both serial access and network access to the serial switch. Port redirection to the nodes allows the administrator to go directly to the node while the switch handles the routing.
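
Many switches implement this redirection by exposing one TCP port per serial port, so a connection to the right port on the switch lands directly on a node's console. The sketch below assumes such a scheme; the switch address and port numbering are hypothetical, and the actual mapping depends on the switch:

    # Hypothetical port-redirection client: connect to the console switch
    # at the TCP port mapped to a given node and stream its console output.
    import socket, sys

    SWITCH = "10.0.0.250"   # assumed console switch address
    BASE_PORT = 3000        # assumed base of the per-node port range

    node = int(sys.argv[1])                      # e.g. "4" for node 4
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((SWITCH, BASE_PORT + node))     # the switch routes to the node
    while 1:
        data = sock.recv(1024)
        if not data:
            break
        sys.stdout.write(data.decode("ascii", "replace"))
    sock.close()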

Unlike keyboard, video, and mouse (KVM) switches, serial console switches give administrators quick, efficient console access to a node without the overhead of running an X session on it. And unlike KVM solutions, serial solutions are scalable.

Conclusion

The right cluster management tools allow organizations to dedicate resources to core business or research instead of spending time integrating, configuring, and managing complex Linux cluster systems. Organizations can receive the most return on their Linux clustering investment by training and empowering administrators. This ultimately leads to peace of mind for the administrator and a lower total cost-of-ownership for the organization.

Author

For more information, Jason Lowry can be reached at Tel: 801 562 1010, ext. 221, or email: [email protected].