Cluster Lifecycle Management: Part 2 of 7

Matching Needs to Cluster Design

By Deepak Khosla

In the first Cluster Lifecycle Management column, I discussed the importance of performing a Needs Assessment to ensure that an organization implementing its first HPC cluster designs a system that fulfills the requirements of its users and ultimately provides a positive ROI for the enterprise. It’s important to keep in mind there are no one-size-fits-all solutions in HPC implementation. Your Cluster Design – which includes critical elements such as selections of nodes, interconnects and operating system – will be unique based on your particular needs.

As explained in the first column, much of the Needs Assessment must focus on the software applications that will run on the cluster, because their operational parameters dictate the selection of many critical system components, compute nodes being first on the list. Each application has an optimal memory bandwidth, expressed in gigabytes per second (GB/s), that it needs per CPU in the node to run effectively. The key is choosing or designing a node that can most effectively support the application's GB/s rating across the number of CPUs within the node.

For the typical organization, even one implementing a small HPC cluster, there are often multiple applications that will be run on the system. In this situation, the rating of the most memory bandwidth-intensive application should drive the choice of compute nodes. So if the most demanding application requires 64 GB/s of memory bandwidth, a node with that throughput will accommodate not just that one application but also all those requiring less. The size of memory in the node is the other important factor: the memory each application requires per CPU, or overall, dictates how much memory to design into each node.
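This sizing rule can be sketched as a quick calculation. The application names and figures below are hypothetical, purely to illustrate taking the maximum requirement across the application mix:

```python
# Hypothetical per-application requirements: memory bandwidth per CPU
# and memory per CPU. Real values come from the Needs Assessment.
apps = {
    "cfd_solver":    {"bandwidth_per_cpu": 64, "mem_gb_per_cpu": 4},
    "post_process":  {"bandwidth_per_cpu": 20, "mem_gb_per_cpu": 2},
    "visualization": {"bandwidth_per_cpu": 12, "mem_gb_per_cpu": 8},
}

cpus_per_node = 16

# A node sized for the most bandwidth-hungry application can run all of them.
node_bandwidth = max(a["bandwidth_per_cpu"] for a in apps.values())

# Memory per node is driven by the largest per-CPU memory requirement.
node_memory = max(a["mem_gb_per_cpu"] for a in apps.values()) * cpus_per_node

print(f"Design target: {node_bandwidth} bandwidth units per CPU, "
      f"{node_memory} GB RAM per node")
```

If one application dwarfs the rest, this max-driven sizing is exactly what makes a split into 'fat' and 'thin' nodes attractive, as discussed next.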

There are, however, implementations in which one application is a memory hog compared to all of the others combined. In that case, it is often less expensive to install one or more large memory nodes (often called ‘fat nodes’) for that particular application, and ‘thin nodes’ to handle the other less demanding applications.

The number of nodes installed in the cluster will depend on many variables, but one of the most important is a business consideration. Additional nodes increase the speed with which the cluster completes its processing and produces the answers the organization needs. Driven by business requirements, some organizations may need their applications to generate answers in two hours, while eight hours is sufficient for others. For organizations that need results faster, the extra nodes will be worth the investment.
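Assuming near-linear scaling, which is a best-case simplification since few applications scale perfectly, the trade-off between node count and time-to-answer can be estimated as:

```python
import math

def nodes_for_deadline(baseline_hours, baseline_nodes, target_hours):
    """Estimate the node count needed to shrink a job from its baseline
    runtime to a target runtime, assuming linear speedup (best case)."""
    return math.ceil(baseline_nodes * baseline_hours / target_hours)

# Hypothetical example: a job takes 8 hours on 16 nodes, but the
# business needs answers in 2 hours.
print(nodes_for_deadline(8, 16, 2))  # 64
```

In practice the real scaling curve of each application, measured or taken from vendor benchmarks, should replace the linear assumption before any purchase decision.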

Another important element in cluster design is the need for local disk space on the node. This is dependent on data input/output (I/O) requirements of the applications that will run on the system. Each application is rated for a certain frequency and volume of data reads and writes. Some applications will require the local disk to provide temporary ‘scratch’ space to hold data generated during processing. If scratch is needed, the I/O bandwidth of the application will dictate how many drives to put in the local node.

Reading and writing data to and from local disks takes time. For some applications, the speed with which they can read/write data can be more important than the volume of data involved. These applications are considered ‘latency sensitive’ (needing low latency) with respect to disk I/O.

Bandwidth and latency both play roles in selection of disk drives for the node and can significantly impact the overall cost of the implementation. Generally speaking, applications with high I/O and low latency ratings require SSD drives, which are the most expensive. High I/O and medium latency applications can get by with SAS drives, middle of the road in price. If latency is not an issue for the application, SATA drives are sufficient and also the least costly.
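The rule of thumb above can be written as a simple decision function. The tier names and branching mirror the paragraph; the function itself is illustrative, not vendor guidance:

```python
def drive_tier(high_io: bool, latency_sensitive: str) -> str:
    """Pick a drive technology from I/O intensity and latency sensitivity.
    latency_sensitive is 'low', 'medium', or 'none' (how low the required
    latency is). Illustrative only -- real selection also weighs capacity,
    endurance, and budget."""
    if not high_io or latency_sensitive == "none":
        return "SATA"   # least costly; fine when latency is not an issue
    if latency_sensitive == "low":
        return "SSD"    # high I/O, low latency: most expensive
    return "SAS"        # high I/O, medium latency: mid-priced

print(drive_tier(True, "low"))      # SSD
print(drive_tier(True, "medium"))   # SAS
print(drive_tier(False, "none"))    # SATA
```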

In addition to local disks, the cluster will need access to central storage for retrieving input data and storing output data that must be shared among all nodes, as well as with external users. These external storage devices move data to and from the nodes through an I/O interconnect. Typically, a 1G or 10G connection per node is sufficient for accessing shared storage, because the throughput of the network or the storage itself is usually the limiting factor. For high-throughput, low-latency I/O needs, it is best to use local storage.

Another key implementation factor dictated by the applications is the need for an application interconnect that allows nodes to communicate with each other. Some applications require low-latency communication among the nodes in the cluster, and some do not. For those that require latency in the single-digit microseconds or throughput over 10 Gbps, for example, InfiniBand (IB) is the best fit today.

At the other extreme, if an application requires only high-volume data transfers at the beginning and end of its jobs, and has very light inter-node communication needs, then a single 1G or 10G interconnect can handle both I/O and application traffic. If Ethernet is determined to be sufficient, then even if the need is only 1G today, it is still recommended to specify systems with a built-in 10G port, as it is becoming the default configuration and will accommodate future growth.
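The interconnect guidance from the last two paragraphs can be combined into one sketch. The thresholds come from the text; the function and its inputs are hypothetical:

```python
def application_interconnect(latency_us: float, throughput_gbps: float) -> str:
    """Choose an application interconnect from an application's inter-node
    communication needs. Single-digit-microsecond latency or >10 Gbps
    throughput points to InfiniBand; otherwise Ethernet can carry both
    I/O and application traffic. Illustrative rule of thumb only."""
    if latency_us < 10 or throughput_gbps > 10:
        return "InfiniBand"
    # Light inter-node traffic: spec 10G ports anyway for future growth.
    return "10G Ethernet"

print(application_interconnect(latency_us=5, throughput_gbps=40))   # InfiniBand
print(application_interconnect(latency_us=500, throughput_gbps=1))  # 10G Ethernet
```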

The final decision in the cluster design is the operating environment. Again, the applications drive this selection. Certain applications run most efficiently in specific operating environments, and this requirement is well documented for each application. For an organization running multiple applications on its cluster, the challenge is finding one operating environment that satisfies the specifications of all applications. This is usually the least expensive option.

Often, however, multiple applications will require more than one operating environment. One option at this point is to install separate sets of nodes, each running the desired operating system. But there is another option: set up the nodes to run in multiple operating environments by establishing a provisioning system that dynamically switches from one environment to another depending on which applications are running. While more complicated, this approach provides the flexibility to match resources to the workload.

In the next column, I will address the steps to take in using your Cluster Design to find the vendor that can supply you with the right system.
