As published in InsideHPC

General Business Supercomputing
Google’s Acquisition of PeakStream Provides Evidence of the New HPC

It’s an increasingly common pattern in HPC. A small company bursts onto the scene with an innovative technology that improves productivity for a category
of applications. Seeking to lock out competitors by tying up the boost in price/performance for itself, an HPC behemoth wades in and snaps it up
in one bite. The democratization of technology has fueled this trend, and Tabor Research expects it to continue to accelerate. Recently, it was
PeakStream’s turn.

The upstart stream computing company had turned a few heads for its software’s ability to accelerate applications that benefit from data parallel
programming strategies. Such strategies can target current advances in multicore processor architectures found on general purpose CPUs, GPUs, or
gaming processors. An acquisition was predictable, but few people foresaw who the acquirer would be: Google.
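The data-parallel idiom the article refers to can be sketched briefly. The example below is plain Python with hypothetical names; PeakStream’s actual product was a C/C++ stream-programming platform and its real API is not reproduced here. The sketch only illustrates the general pattern: one kernel applied independently to chunks of a data set, so the work can be spread across cores (or, in PeakStream’s case, offloaded to GPUs).

```python
# Illustrative sketch only: shows the data-parallel pattern, not PeakStream's API.
from concurrent.futures import ThreadPoolExecutor

def saxpy_chunk(args):
    """Apply the classic SAXPY kernel (a*x + y) to one chunk of data."""
    a, xs, ys = args
    return [a * x + y for x, y in zip(xs, ys)]

def saxpy(a, xs, ys, workers=4):
    """Split the input into chunks and run the same kernel on each in parallel."""
    step = max(1, len(xs) // workers)
    chunks = [(a, xs[i:i + step], ys[i:i + step]) for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial = pool.map(saxpy_chunk, chunks)
    return [v for chunk in partial for v in chunk]

print(saxpy(2.0, [1.0, 2.0, 3.0, 4.0], [10.0] * 4))  # prints [12.0, 14.0, 16.0, 18.0]
```

Because each chunk is independent, the same decomposition scales from a multicore CPU to a GPU’s many execution units, which is precisely the property stream computing platforms exploit.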

Tabor Research Analysis: Why Google + PeakStream Makes Sense

Much of Google’s ultra-scale business computing application portfolio fits the workflow profile that can be optimized by stream computing. Furthermore,
Google faces extreme ongoing pressure from the likes of Yahoo!, MSN, and others.

By acquiring PeakStream, Google gains control over aspects of the development of stream computing architectures and algorithms. Google therefore adds
to its bag of non-standard tricks it can use to scale higher or perform faster. This acquisition is a clear indicator that Google is going beyond
standard business computing. Google is investing in supercomputers.

Does Ultra-Scale Business Computing = Supercomputing?

Tabor Research and HPCwire have already published several documents on our view of High Productivity Computing. We are expanding the traditional definition
along two dimensions. The supply-side, technology dimension is extended to include factors beyond the computation engines, such as the software
stack, storage systems, interconnects, facilities and so on. The demand-side, application dimension is extended to include edge-of-the-envelope
business applications whose profiles mirror traditional HPC workflows. Thus Google enters into the equation.

For Google, web serving and search algorithms go beyond any off-the-shelf solution that can be easily acquired and maintained. The scalability and optimization
implicit in Google’s strategy and scope require the use of technology above and beyond standard enterprise solutions, either in architecture, software,
or system management. In a nutshell, Google needs and employs supercomputers.

HPC Market Intelligence Going Forward

Tabor Research is tracking two different types of HPC usage:

  • Traditional scientific and engineering HPC applications, regardless of the size of the server; and
  • Supercomputers, regardless of the application.

This methodology affords us the best ability to track market and technology dynamics.

It is important to note that general business computing, meaning the use of servers for tasks such as email, web serving, payroll databases, enterprise resource
planning (ERP), and the like, does not fit this definition of HPC, and in general, Tabor Research will not track it. Thus web serving is not
an HPC application per se. Whether we count a given site in our view of High Productivity Computing becomes a question of scale, computational
complexity, or the required use of specialized technology.

Our first InterSect360™ market advisory service reports will contain specific definitions for what constitutes a supercomputer. It is not a simple
matter of price bands or core counts, but rather an examination of whether special steps need to be taken to accommodate the scalability requirements.
Tabor Research will track “ultra-scale business computing” as one of the vertical segments in the supercomputing echelon of the market.
