Clustering Technologies

Updated: Thu, 16 Apr 2015 by Rad

A cluster is a group of two or more physical servers that appears to the network as a single server. The servers in the cluster, called nodes, operate together to provide redundancy and load balancing to the corporate network by taking over the operations of any node that fails.

In a computer system, a cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.

Clusters introduction

[Image: clustering technologies. Credit: Bruno Cordioli, flickr. Licensed under Creative Commons Attribution License.]

Clustering technologies enable businesses to achieve high availability and scalability for their mission-critical applications. These applications include:

  • corporate databases
  • e-mail
  • Web-based services such as retail Web sites

Scale when you need to

By using appropriate clustering technologies and carefully implementing good design and operational practices (for example, configuration management and capacity management), you can scale your installation appropriately and ensure that your applications and services are available whenever customers and employees need them.

High Availability and Scalability

High availability is the ability to provide user access to a service or application for a high percentage of scheduled time by attempting to reduce unscheduled outages and mitigate the impact of scheduled downtime for particular servers.

Scalability is the ability to easily increase or decrease computing capacity. A cluster consists of two or more computers working together to provide a higher level of availability, scalability, or both than can be obtained by using a single computer.

Availability is increased in a cluster because a failure in one computer results in the workload being redistributed to another computer. Scalability tends to be increased, because in many situations it is easy to change the number of computers in the cluster.
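The redistribution described above can be sketched in a few lines. The following is a minimal, illustrative model (not any particular cluster product's API): a cluster assigns workloads to the least-loaded node, and when a node fails its workloads fail over to the surviving nodes.

```python
# Minimal sketch (illustrative only): redistributing a failed node's
# workload to the surviving nodes in a cluster.

class Cluster:
    def __init__(self, node_names):
        # Each node carries a list of workloads (e.g. services or jobs).
        self.nodes = {name: [] for name in node_names}

    def assign(self, workload):
        # Place new work on the least-loaded healthy node.
        target = min(self.nodes, key=lambda n: len(self.nodes[n]))
        self.nodes[target].append(workload)

    def fail(self, name):
        # When a node fails, its orphaned workloads fail over to the rest.
        orphaned = self.nodes.pop(name)
        for workload in orphaned:
            self.assign(workload)

cluster = Cluster(["node-a", "node-b"])
for job in ["db", "mail", "web"]:
    cluster.assign(job)

cluster.fail("node-a")
# All three workloads are now served by node-b: service stays available,
# although the surviving node carries a heavier load.
print(cluster.nodes)
```

Scaling up is the mirror image of this sketch: adding a node to the dictionary immediately makes it a target for new assignments.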

Open Source legacy

Computer clusters first emerged in universities and research centres, where extra computing power was especially needed. These organisations are typically characterised by tight budgets and staff with deep computing expertise.

Thus, it's not surprising that cluster management software grew up primarily from open source Linux projects that cost almost nothing.

Distributed Computing

Clustering is a popular strategy for implementing parallel processing applications because it enables companies to leverage the investment already made in PCs and workstations. In addition, it's relatively easy to add new CPUs simply by adding a new PC to the network.

The Distributed Computing Environment (DCE) is a widely used industry standard that supports this kind of distributed computing. On the Internet, third-party service providers now offer some generalised services that fit into this model.
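The parallel-processing pattern described above can be illustrated with a short sketch. This is not DCE itself; it simply uses local worker processes as stand-ins for the PCs in a cluster, splitting a job into chunks, computing the chunks in parallel, and combining the partial results (a real cluster would ship the chunks over the network instead).

```python
# Illustrative sketch of the parallel-map pattern behind much cluster
# computing. Local worker processes stand in for cluster nodes.
from multiprocessing import Pool

def partial_sum(chunk):
    # The work each "node" performs independently on its share of the data.
    return sum(x * x for x in chunk)

def cluster_sum_of_squares(data, workers=4):
    # Split the data into one chunk per worker, compute in parallel,
    # then combine the partial results.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    # Same result as the sequential sum(x * x for x in data).
    print(cluster_sum_of_squares(data))
```

Adding capacity in this model is exactly the point made above: raising the number of workers (or, in a real cluster, plugging in another PC) increases throughput without changing the program's structure.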

Open Source distributed systems

One interesting open source system is the Berkeley Open Infrastructure for Network Computing (BOINC), an open source middleware system for volunteer and grid computing. The main idea is to use the idle time on your computer (Windows, Mac, Linux, or Android) to cure diseases, study global warming, discover pulsars, and do many other types of scientific research.
