The difference between blades and clusters, or rather between blade-based distributed-memory systems and rack-mounted distributed-memory systems, has from
time to time prompted the following question: Are the two types of systems variations of a common type, or two separate types? (Or, more simply: Are blades
clusters?)

Table 1 compares data on blade-based systems and nodes with clusters and rack-mounted nodes (a.k.a. cluster nodes). Data for this table was generated from
the Technology Views module of the InterSect360™ market advisory service. This database is currently based on specifications for 153 server products
and 177 nodes that are actively sold into the HPC market by 14 suppliers, and it contains technical data at both the system level and the node level.
The database will be updated periodically. More detailed information about server technology currently sold into the HPC market is available in the Tabor
Research report: InterSect360™ Market Advisory Service: Technology Views, “Technology Prevalence in the HPC Market,” September 2007.

There are about three times as many cluster nodes and systems as there are blade nodes and systems in the sample set. The table shows that the similarities
center on the use of standards-based components (such as x86-64 processors and standard interconnects), while the differences center on the greater capacity
afforded by rack-mounted nodes in areas such as sockets per node, configurable disk drives, memory, and I/O ports.

Historically, blades were designed to maximize compute density, i.e., cycles per cubic foot or meter. This involved using large numbers of small form factor
nodes that drew less power and were easier to cool. In this case, the design emphasis was on minimal node configurations needed to meet specific application
requirements. (Most early blade deployments focused on web services applications.) In contrast, rack-mounted nodes evolved from PC server or workstation
clusters, for which the original racks were essentially industrial shelving used to fit more tower-type units into the available floor space. Over time,
more standard and space-efficient packaging was developed, but the rack-mounted node kept the features and functionality of the original PC servers.

Table 1: Attribute Comparison of Blades and Clusters

Attribute                             Blade    Cluster (Rack)   Ratio (Rack/Blade)
Number of servers in study             27           74                2.7
Number of nodes in study               33          105                3.2
% InfiniBand interconnect              26%           9%               0.3
% Ethernet interconnect                62%          59%               0.9
% Myrinet interconnect                  9%          17%               1.9
Average min. cores per node            2.6          2.7               1.0
Average max. cores per node            5.4          7.5               1.4
Proportion based on x86-64             81%          80%               1.0
Average sockets per node               2.0          3.7               1.9
Average disk drives per node           1.9          3.5               1.8
Average max. memory per node (GB)     30.4         68.0               2.2
Absolute max. memory (GB)              80          256                3.2
Average I/O ports per node             6.0         37.7               6.3
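The right-hand column is simply the rack value divided by the blade value. As a minimal sketch of that calculation (using the figures from the table; the variable names are illustrative, and small differences from the printed column are rounding):

```python
# Sketch: recompute the Rack/Blade ratio column of Table 1.
# Attribute values are copied from the table above; names are illustrative only.
attributes = {
    # attribute: (blade value, cluster/rack value)
    "Number of servers in study":        (27,   74),
    "Number of nodes in study":          (33,  105),
    "% InfiniBand interconnect":         (26,    9),
    "% Ethernet interconnect":           (62,   59),
    "% Myrinet interconnect":            (9,    17),
    "Average min. cores per node":       (2.6,  2.7),
    "Average max. cores per node":       (5.4,  7.5),
    "Proportion based on x86-64":        (81,   80),
    "Average sockets per node":          (2.0,  3.7),
    "Average disk drives per node":      (1.9,  3.5),
    "Average max. memory per node (GB)": (30.4, 68.0),
    "Absolute max. memory (GB)":         (80,  256),
    "Average I/O ports per node":        (6.0, 37.7),
}

for name, (blade, rack) in attributes.items():
    ratio = rack / blade  # a ratio above 1 means the rack-mounted node offers more
    print(f"{name:36s} {ratio:4.1f}")
```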

Overall, blade and cluster nodes make use of similar standard technology; however, differences in form factor allow rack-mounted cluster nodes to be configured
with greater processor, memory, and I/O capacity. There is another major difference between blade-based systems and clusters that is not shown in Table 1:
the value-added technology in the supporting physical architecture of blade-based systems. These systems provide a number of features designed to simplify
system integration, increase system reliability, and decrease environmental costs; however, these features come at the cost of “de-standardizing” the product.

Rather than attempting to compete against standard components, blade system architectures have concentrated on two areas:

  • Power- and space-efficient node design – Although a blade uses standard processors, memory, and operating systems, the motherboard
    and overall packaging are highly architected to meet power, cooling, and density requirements. Along the way a number of design trade-offs must
    be made, which tends to limit the scalability within an individual node.
  • Integrated systems architecture – We believe the most interesting part of a blade system is the chassis or enclosure used to house, support,
    and unify the blades into a system (a minimal structural sketch follows this list). Chassis generally provide:

    • Mid-planes – Paths (i.e., wires) for node-to-node communications and for external network or I/O device communications. Mid-planes
      are generally “passive” in that they do not implement specific communications protocols.
    • Network switch support – The chassis supports one or more types of network switch modules (e.g., Ethernet, InfiniBand, Myrinet) that attach
      to the mid-plane and provide the active component of the interconnect network.
    • Power subsystems – Power supplies (usually redundant and hot swappable) and the wiring between these supplies and nodes.
    • Cooling subsystems – Cooling fans (usually redundant and hot swappable), designed for efficient air flow within the enclosure.
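
As a rough illustration of this division of labor (not any vendor's actual design; all class and field names below are hypothetical), the chassis can be thought of as the unit that owns the shared infrastructure while the blades remain minimal compute elements:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of a blade chassis, illustrating the composition described
# above. Names and fields are illustrative, not a vendor specification.

@dataclass
class Blade:
    sockets: int        # compute resources live on the blade itself
    memory_gb: int

@dataclass
class SwitchModule:
    fabric: str         # e.g. "Ethernet", "InfiniBand", "Myrinet"
    ports: int          # active component attached to the passive mid-plane

@dataclass
class Chassis:
    blades: List[Blade] = field(default_factory=list)
    switches: List[SwitchModule] = field(default_factory=list)  # network switch support
    power_supplies: int = 2     # usually redundant and hot swappable
    cooling_fans: int = 2       # usually redundant and hot swappable
    # The mid-plane itself is passive wiring and carries no protocol logic here;
    # it is implied by the blades and switch modules sharing one enclosure.

# Example: a small enclosure with Ethernet and InfiniBand switch modules.
chassis = Chassis(
    blades=[Blade(sockets=2, memory_gb=16) for _ in range(8)],
    switches=[SwitchModule("Ethernet", 24), SwitchModule("InfiniBand", 12)],
)
print(len(chassis.blades), "blades sharing", chassis.power_supplies, "power supplies")
```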

Getting back to our original question: Are blades clusters? Our analysis suggests they are not. However, we do not come to this conclusion based on the
results presented in Table 1. First, these differences result in large part from the original design goals for clusters and blade-based systems.
Second, there is no inherent reason for the differences to continue over time. There is nothing that would prevent a system architect from creating
a blade as powerful as most (if not all) rack-mounted nodes. It would just be a very large blade.

What makes blade-based systems separate entities from clusters is the amount of engineering effort and thus intellectual property that goes into their
design and manufacturing. Blade systems are designed as differentiated products that can compete not only with each other but also with standards-based
clusters. By differentiating products through added value (as opposed to celebrity endorsements, cool exterior design, neat giveaways, etc.), suppliers
also de-standardize those products. In this case, added value equals intellectual property equals proprietary system. An IBM blade will not fit into
an HP enclosure, and there are no multi-vendor standards for commercial off-the-shelf (COTS) blade systems.
