HPC Industry Dynamics
New areas of innovation are being driven by technology advancements across the HPC landscape.
Twenty years ago, a high performance computer was a relatively self-contained device: one vendor might have designed the server, the processor, and the operating environment, providing a homogeneous application environment. The 1990s and 2000s brought waves of commoditization that democratized the industry, and today the technology landscape is wide open, driven by independent vendors on many fronts.
In today's environment, critical HPC innovations come not only from server, blade, and cluster vendors, but from designers of microprocessors, interconnect fabrics, accelerators, storage devices, file systems, operating systems, developer tools, on-demand services, and a wide range of application software. Intersect360 Research covers the HPC industry by incorporating all of these areas in our total market models and in-depth analysis.
Some of the industry trends we're watching include:
- New application areas: The accessibility of HPC technologies can fuel the adoption of modeling and simulation techniques by more organizations. These adopters may come from traditional science and engineering realms - such as automotive component suppliers, consumer product manufacturers, or independent science labs - or from business-oriented, data-driven applications - such as fraud detection, real-time online environments, or buying trend analysis. Technologies like multi-core processors, integrated desktop form factors, and the availability of standard Windows or Linux environments make it easier for organizations to adopt HPC for the first time.
- New usage and delivery models: HPC acquisition used to follow a simple formula: buy the biggest, most powerful computer you can afford and run it as hard and as long as you can. Today's buyers face constraints beyond their capital budgets, including limitations on datacenter space, power consumption, and administration resources. HPC cycles and services can now be delivered in ways that address these constraints, with self-contained container systems, on-demand licenses, and cloud-enabled utility models such as cloud bursting and software as a service.
- New productivity metrics: Peak performance per dollar is a simple metric, but it is often not a useful one. As multi-core processors and standards-based network switches drive clusters through teraflops toward petaflops levels, and as data sizes continue to expand exponentially, many organizations are left wondering how to be sure they are using all that capability productively. Technologies like intelligent switching fabrics, parallel file systems, and integrated development environments often hide behind the curtain but can make a huge difference in delivering efficient performance at scale.
In a market driven by innovation, the only constant is change. There is no practical end to the scope and complexity of problems that can be addressed by HPC. Intersect360 Research understands the increasing complexity in designing and selecting HPC systems, as we continually monitor technology trends and how they affect the industry. Our demand-side research puts today's innovations in perspective and provides a headlights view of where the market will be tomorrow.
©2011 Intersect 360 Inc. All rights reserved.