The Coming Intersection of HPC and the Enterprise Data Center – Forbes blog by John Webster

By John Webster, Tuesday, February 20th, 2018

Categories: Analyst Blogs

Tags: AI, artificial intelligence, blog, data center, enterprise, Forbes, high performance computing, HPC, Internet of Things, IoT, John Webster

High Performance Computing (HPC) traditionally exists as a separate and distinct discipline from enterprise data center computing. Both use the same basic components—servers, networks, storage arrays—but are optimized for different types of applications. Those within the data center are largely transaction-oriented, while HPC applications crunch numbers and high volumes of data. However, an intersection is emerging, driven more recently by business-oriented analytics that now fall under the general category of Artificial Intelligence (AI).

Data-driven, customer-facing online services are advancing rapidly in many industries, including financial services (online trading, online banking), healthcare (patient portals, electronic health records), and travel (booking services, travel recommendations). The explosive, global growth of SaaS and online services is leading to major changes in enterprise infrastructure, with new application development methodologies, new database solutions, new infrastructure hardware and software technologies, and new datacenter management paradigms. This growth will only accelerate as emerging Internet of Things (IoT)-enabled technologies like connected health, smart industry, and smart city solutions come online in the form of as-a-service businesses.

Business is now about digital transformation. In the minds of many IT executives, this typically means delivering cloud-like business agility to their user groups—transform, digitize, become more agile. And it is often the case that separate, distinctly new cloud computing environments are stood up alongside traditional IT to accomplish this. Transformational IT can now benefit from a shot of HPC.

HPC paradigms were born from the need to apply sophisticated analytics to large volumes of data gathered from multiple sources. Sound familiar? The Big Data way to say the same thing was “Volume, Variety, Velocity.” With the advent of cloud technologies, HPC applications have leveraged storage and processing delivered from shared, multi-tenant infrastructure. Many of the same challenges addressed by HPC practitioners are now faced by modern enterprise application developers.

As enterprise cloud infrastructures continue to grow in scale while delivering increasingly sophisticated analytics, we will see a move toward new architectures that closely resemble those employed by modern HPC applications. Characteristics of new cloud computing architectures include independently scaling compute and storage resources, continued advancement of commodity hardware platforms, and software-defined datacenter technologies—all of which can benefit from an infusion of HPC technologies. These are now coming from the traditional HPC vendors—HPE, IBM, and Intel with its 3D XPoint, for example—as well as some new names like NVIDIA, the current leader in GPU cards for the AI market.
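To make that GPU angle concrete, here is a minimal sketch—assuming PyTorch as the framework, since the post names none—of the offload pattern NVIDIA's cards enable: the same matrix multiply timed on the CPU and, when a CUDA device is available, on the GPU.

```python
# A minimal sketch of GPU offload, using PyTorch (an assumption; the post
# names no specific framework). It times an identical matrix multiply on
# the CPU and, when available, on an NVIDIA GPU.
import time
import torch

def time_matmul(device: torch.device, n: int = 4096) -> float:
    """Return seconds taken to multiply two n x n matrices on `device`."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()  # finish any pending GPU work first
    start = time.perf_counter()
    torch.matmul(a, b)
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul(torch.device('cpu')):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul(torch.device('cuda')):.3f} s")
```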

To extract better economic value from their data, enterprises can now more fully enable machine learning and deep neural networks by integrating HPC technologies. They can merge the performance advantages of HPC with AI applications running on commodity hardware platforms. Instead of reinventing the wheel, the HPC and Big Data compute-intensive paradigms are now coming together to provide organizations with the best of both worlds. HPC is advancing into the enterprise data center and it’s been a long time coming.
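As a rough illustration of that merger, the sketch below trains a small neural network on a GPU when one is present and falls back to commodity CPU hardware otherwise. PyTorch, the model shape, and the synthetic data are all assumptions made for illustration, not anything prescribed by the vendors above.

```python
# A hedged sketch of the HPC-plus-AI pattern: a small feed-forward network
# trained with PyTorch, placed on a GPU when available. Model shape and
# synthetic data are illustrative assumptions only.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Synthetic regression data: 10 input features, 1 target value.
x = torch.randn(1024, 10, device=device)
y = torch.randn(1024, 1, device=device)

model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()  # backpropagation; runs as GPU kernels on CUDA
    optimizer.step()

print(f"final loss on {device}: {loss.item():.4f}")
```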

 
