As published in HPCWire

Traditional HPC and Edge HPC — The Same Only Different

Tabor Research is in the midst of conducting in-depth end-user interviews with organizations running or considering Edge HPC applications (see http://www.taborresearch.com/edgemarket.html
for the Edge HPC definition). Now that we have completed the initial interviews, several similarities and differences between the two branches of high productivity
computing have become apparent.

Similarities include:

  • Bleeding edge computing — Interviewees often sounded like old-time supercomputer users, noting that standard solutions for their problems were
    not available and had to be developed in-house. They also noted that their organizations were constantly refreshing technology and were among
    the first to adopt new product features.
  • Latency is everything — One of the biggest issues is interconnect and network performance, particularly communications latency. This is one of
    the oldest and most persistent issues in HPC, as each improvement in component performance places new pressures on interconnects and networks
    to keep systems in balance. “You can buy bandwidth, but you can’t bribe God for latency.” – Ancient Cybernetic Proverb
  • Pain of parallelism — It is difficult to find programmers who can write “decent multithreaded code.”
  • Environmental issues — Space, power, and cooling are major concerns. No real surprise here; however, interviewees cited some interesting
    measures: throughput per kW and performance per rack unit ("U"). (A rough sketch of these measures follows this list.) Once again, the shade
    of green in computing is the same as on a dollar bill.
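
As an aside, and purely as an illustration rather than anything reported by the interviewees, the two measures above could be computed along the
following lines. This is a minimal Python sketch with made-up rack figures; the workload, power, and performance numbers are hypothetical and only
the ratios matter.

    # Hypothetical rack figures; only the ratios illustrate the two measures.
    jobs_per_hour = 1200.0      # assumed sustained job throughput for one rack
    power_draw_kw = 18.5        # assumed rack power draw in kW
    sustained_gflops = 4200.0   # assumed sustained rack performance in GFLOPS
    rack_units = 42             # a standard full-height rack

    throughput_per_kw = jobs_per_hour / power_draw_kw   # ~64.9 jobs/hour per kW
    performance_per_u = sustained_gflops / rack_units   # 100.0 GFLOPS per U

    print(f"Throughput per kW: {throughput_per_kw:.1f} jobs/hour/kW")
    print(f"Performance per U: {performance_per_u:.1f} GFLOPS/U")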

Differences include:

  • Data scaling — Broadly speaking, traditional HPC requirements tend to be driven by problem complexity, with database size being scaled up to improve
    fidelity. In contrast, some classes of Edge HPC applications begin with solvable problems that grow to HPC scale as they are applied to larger
    and larger datasets. Such growth is often made possible by the reach of the Internet. Examples include combining the scheduling systems of all
    colleges and universities into a statewide system, or expanding the number of simultaneous users in an online computer game.
  • I/O patterns — Traditional HPC applications tend to stream data from one or more flat files through the CPU and back to flat files. Edge HPC applications
    are more likely to access multiple databases and/or real-time data feeds.
  • Real-world interactions — Traditional scientific and engineering HPC attempts to understand the physical world or answer “what-if” questions about
    product designs; it essentially plays an observer role in seeing how things work. Edge HPC tends to involve interactions with the environment
    in modes ranging from real time to operational time. Applications range from network security, where intrusions need to be identified as they
    are happening, to ongoing decision processes, where an initial decision affects later options.
  • Compute-to-cost connection — Historically, traditional HPC has suffered from an inability to identify the monetary benefits of research and development,
    because the value of a discovery cannot be measured until after it is…discovered. Edge HPC dollar benefits are often very clear, as the ability
    to do business may depend on the system’s ability to scale with demand. The “HPC as product” concept leads to requirements for high reliability,
    as applications are run on a 24x7 basis.

One characteristic of the HPC industry analyst job is that technology, applications, and user genius combine to boggle the mind on a regular basis.
(I have never figured out whether I should consider this a perk or ask for hazardous-duty pay.) This current excursion into the Edge market has so
far rated high on the boggle-o-meter, producing surprises in such areas as system scaling requirements, real-world-to-model communications requirements,
and diversity of applications. That, in turn, provides yet another similarity between Traditional and Edge High Productivity Computing.
