As published in HPCwire

The chronology of high performance computing can be divided into “ages” based on the predominant system architectures of each period. Starting in
the late 1970s, vector processors dominated HPC. By the end of the next decade, massively parallel processors were able to make a play for market
leader. For the last half of the 1990s, RISC-based SMPs were the leading technology. And finally, clustered x86-based servers captured market leadership
in the early part of this century.

This architectural path was dictated by the technical and economic effects of Moore’s Law. Specifically, the doubling of processor clock speed every
18 to 24 months meant that, without doing anything, applications also roughly doubled in speed at the same rate. One effect of this “free ride”
was to drive companies attempting to create new HPC architectures from the market. Development cycles for new technology simply could not outpace
Moore’s Law-driven gains in commodity technology, and product development costs for specialized systems could not compete against products sold
to volume markets.

The more general-purpose systems were admittedly not the best architectures for HPC users’ problems. However, commodity-component-based computers were
inexpensive, could be racked and stacked, and were continually getting faster. In addition, users could attempt to parallelize their applications
across multiple compute nodes to get additional speedups. In a recent Intersect360 study, users reported a wide range of scalable applications,
with some using over 10,000 cores, but with the median for a typical HPC application at only 36 cores.

In the mid-2000s, Moore’s Law went through a major course correction. While the number of transistors on a chip continued to double on schedule, the
ability to increase clock speed hit a practical barrier: “the power wall.” The exponential increase in power required to drive clock speeds ever
higher hit practical cost and design limits. The power wall led to clock speeds stabilizing at roughly 3 GHz and to multiple processor cores being placed
on a single chip, with core counts now ranging from 2 to 16. This ended the free ride for HPC users based on ever-faster single-core processors
and is forcing them to rewrite applications for parallelism.

In addition to the power wall, the scale-out strategy of adding capacity by simply racking and stacking more compute server nodes caused some users
to hit other walls, specifically the computer room wall (or “wall wall”), where facilities issues became a major problem. These include physical
space, structural support for high-density configurations, cooling, and getting enough electricity into the building.

The market is currently looking to a combination of four strategies to increase the performance of HPC systems and applications: parallel applications
development; adding accelerators to standard commodity compute nodes; developing new purpose-built systems; and waiting for a technology breakthrough.

Parallelism is like the “little girl with the curl”: when parallelism is good it
is very, very good, and when it is bad it is horrid. Very good parallel applications (a.k.a. embarrassingly parallel) fall into such categories as
signal processing, Monte Carlo analysis, image rendering, and the TOP500 benchmark. The success of these areas can obscure the difficulty of developing
parallel applications in other areas. Embarrassingly parallel applications have a few characteristics in common:

  • The problem can be broken up into a large number of sub-problems.
  • These sub-problems are independent of one another; that is, they can be solved in any order and without requiring any data transfer to or from other
    sub-problems.
  • The sub-problems are small enough to be effectively solved on whatever the compute node du jour might be.

When these constraints break down, the programming problem first becomes interesting, then challenging, then maddening, then virtually impossible.
The programmer must manage ever more complex data traffic patterns between sub-problems, plus control the order of operations of various tasks,
plus attempt to find ways to break larger sub-problems into sub-sub-problems, and so on. If this were easy it would have been done long ago.
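The degradation described above is captured by Amdahl’s law, a standard result the article does not name: the serial portion of a program bounds the achievable speedup no matter how many cores are added. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: overall speedup is limited by the serial fraction,
    which does not shrink as cores are added."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)
```

Even with 95 percent of the work parallelized, 1,024 cores deliver a speedup of only about 19.6x; the remaining 5 percent of serial work dominates.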

Adding accelerators to standard computer architectures is a technique that has been used throughout the history of computer architecture development.
Current HPC markets are experimenting with graphics processing units (GPUs) and to a lesser extent field programmable gate arrays (FPGAs).

GPUs have long been a standard component in desktop computers. GPUs are of interest for several reasons: they are inexpensive commodity components,
they have fast independent memories, and they provide significant parallel computational power.

FPGAs are standard devices long in use within the electronics industry for quickly developing and fielding specialty chips that are often replaced
in products by standard ASICs over time. FPGAs allow HPC users to essentially customize the computer to the requirements of their applications.
In addition they should benefit from Moore’s Law advancements over time.

Challenges for accelerator-based systems stem from a single program being run over two different processing devices, one a general-purpose processor
with limited speed, and the other an accelerator with high processing speed but with limited overall functionality. Challenges fall into three
major areas:

  • Programming — Computers can be built to arbitrarily high levels of complexity, however the average complexity of computer programmers
    remains a constant. Accelerators add two levels of complexity for applications development, first writing a single program that is divided
    between two different processor types, and second, writing a program that can take advantage of the specific characteristics of the accelerator.
  • Control and communications — Performance gains from accelerators can be diminished or lost to the overhead of
    setting up the problem on the accelerator, moving data between the standard processor and the accelerator, and coordinating the operations
    of both compute units.
  • Data management — Programming complexity is increased and performance is reduced in cases where the standard processor and accelerator
    use separate independent memories. Issues for managing data across multiple processors range from determining proper data decomposition, to
    efficiently moving data in and out of the proper memories, to stalling processes while waiting on data from another memory, to debugging programs
    where it is unclear which processor has last modified a data item.

Many of these issues are associated with parallel computing in general; however, they are still significant for accelerator-based operations, and the
close coupling between the processor and the accelerator may require programmers to have a deep understanding of the behavior of the physical hardware
components.
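A back-of-the-envelope cost model (purely illustrative; every rate below is an assumed round number, not a measurement) shows how setup and transfer overhead can erase an accelerator’s raw speed advantage on small problems:

```python
def offload_pays_off(data_bytes: float, flops: float,
                     cpu_gflops: float = 50.0, acc_gflops: float = 1000.0,
                     link_gbytes_per_s: float = 10.0,
                     setup_seconds: float = 1e-4) -> bool:
    """Compare estimated runtimes: plain CPU vs. accelerator with offload
    overhead. All default parameter values are hypothetical."""
    cpu_time = flops / (cpu_gflops * 1e9)
    transfer_time = 2 * data_bytes / (link_gbytes_per_s * 1e9)  # to and from device
    acc_time = setup_seconds + transfer_time + flops / (acc_gflops * 1e9)
    return acc_time < cpu_time
```

Under these assumed numbers, a large compute-heavy kernel (1 TFLOP over 1 GB of data) wins on the accelerator, while a tiny kernel loses to setup and transfer costs alone, which is the control-and-communications challenge above in arithmetic form.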

Purpose-built systems are systems that are designed to meet the requirements of HPC workflows. (These systems were initially called supercomputers.)
In today’s market, new HPC architectures still make use of commodity components such as processor chips, memory chips/DIMMS, accelerators, I/O
ports, and so on. However they introduce novel technologies in such areas as:

  • Memory subsystems — Arguably the most important part of any HPC computer is the memory system. In a normal workflow, HPC applications tend to stream
    a few large data sets from storage through memory, into processors, and back again. In addition, such requirements as
    sparse matrix calculations lead to requirements for fast access to non-contiguous data elements. The speed at which data can be moved is
    the determining factor in the ultimate performance of a large portion, if not the majority, of HPC applications.
  • Parallel system interconnects — Parallel computers essentially address the memory bandwidth problem by creating a logically two-dimensional
    memory structure. One dimension is within nodes, i.e., between a node’s local memory and local processors; total bandwidth in this
    case is the sum of all node bandwidths and is very high. The second dimension is the node-to-node interconnect, which is essentially a specialized
    local area network that is significantly slower in both bandwidth and latency than local node memories. As applications become less
    embarrassingly parallel, communication over the interconnect increases, and interconnect performance tends to become the limiting factor
    in overall application performance.
  • Packaging — The speed of computer components, i.e., processors and memories, can be increased by reducing the temperature at which
    they run. In addition, parallel computing latency issues can be addressed by simply packing nodes closer together, which requires both fitting
    more wires into a smaller space and removing large amounts of heat from relatively small volumes.
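The memory-bandwidth point above can be made quantitative with the well-known roofline model (my addition, not the article’s): attainable performance is the lesser of the compute peak and the product of a kernel’s arithmetic intensity and memory bandwidth. A sketch with hypothetical machine numbers:

```python
def roofline_gflops(flops_per_byte: float,
                    peak_gflops: float, mem_bw_gbytes_per_s: float) -> float:
    """Roofline model: a kernel is bound by either the compute roof or the
    memory-bandwidth roof, whichever it hits first."""
    return min(peak_gflops, flops_per_byte * mem_bw_gbytes_per_s)
```

A streaming kernel doing 0.25 flops per byte on a node with an assumed 1,000 GFLOPS peak and 100 GB/s of memory bandwidth is capped at 25 GFLOPS: bandwidth, not processor speed, sets the limit, which is why purpose-built designs invest so heavily in the memory system.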

Developing specialized HPC architectures has, up until recently, been limited by the effects of Moore’s Law, which has shortened product cycle times
for standard products, and limited market opportunities for specialized systems. Those HPC architecture efforts that have gone forward have generally
received support from government and/or large corporation R&D funds.

Waiting for a technology breakthrough (or the “then a miracle happens” strategy) is always an alternative; it is also the path of least resistance,
and one step short of despair. Today we are looking at such technologies as optical computing, quantum entanglement communications, and quantum
computers for potential future breakthroughs.

The issue with relying on future technologies is that there is no way to tell, first, if a technology concept can be turned into a viable product — there
is many a slip between the lab and the loading dock. Second, even if it can be shown that a concept can be productized, it is virtually impossible
to predict when the product will actually reach the market. Even products based on well-understood production technologies can badly overrun schedules,
sometimes bringing to grief those vendors and users who bet on new products.

The above arguments suggest that the next age of high performance computing could be based on anything from reliance on clusters with speed-boost
add-ons to a brave new computer based on technologies that may not have been heard of yet. (You can never go wrong with a forecast like that.)
That said, I am willing to lay odds on purpose-built computers becoming a major component, if not the defining technology, of the HPC market within
the next five years, for two major reasons.

First, there is no “easy” technical solution. Single thread performance has plateaued; the usefulness of accelerators is dependent on both the parallelism
inherent to the application and the connectivity between the accelerator and the rest of the system; and parallelism, while an advantage where
it can be found, is not a panacea for computing performance.

Second, the economics of HPC system development have changed. Users cannot simply sit back and wait for a faster CPU, but must make significant investments
in new software, new architectures, or both. Staying with old economic models will lead to the computational tools defining the science,
where work will be restricted to those areas that run well on off-the-shelf computers.

The HPC market is at a point where the business climate will support greater levels of innovation at the architectural level, which should lead to
new organizing principles for HPC systems. The goal is to find new approaches that effectively combine and optimize the various standard
components into systems that can continue to grow performance across a broad range of applications.

Of course we can always wait for a miracle to happen.
