SuperComputing 18 celebrated the conference's 30th year in Dallas, Texas, with a crowd of over 12,000 attendees. In addition to glimpses of the future, including IBM's quantum computing, some of the original HPC gear was scattered through the show, including an old Cray and Sun Microsystems boxes.
We took away a few major themes, beyond the usual "my supercomputer is faster than yours."
HPC meets AI at SuperComputing. Perhaps we need a new name for the combination of high-end computing, big data, machine learning, and deep learning, as they seem to be merging. For instance, CERN gave examples of using ML to detect anomalies in its models, which are then examined by scientists. The same approach makes it possible to examine a 100-year weather simulation more efficiently or to correct inefficiencies and inaccuracies in Monte Carlo simulations. NVIDIA is clearly a winner in this circle: 127 of the Top500 supercomputers use NVIDIA GPUs, double the count from two years ago. Still, the overall view is that AI remains young, with much of the effort going into training and experimentation.
Power and cooling are top of mind. There was constant discussion of power and cooling, more so than in prior years. It is no longer a question of whether the environment will be water or liquid cooled, but rather when. This is driven not just by the increasing use of GPUs, but also by the growing reliance on next-generation compute platforms such as Intel's Cascade Lake or one of the many new silicon ventures. The Texas Advanced Computing Center (TACC) at the University of Texas at Austin expects its new Frontera system to draw 6 megawatts at peak load. Others showed off racks that were only half full due to the limits of air cooling. As would be expected, investments are being made in new designs and technology approaches. IBM, HPE, and Lenovo each have their own designs, as do others. Lenovo built its own cooling system, called "Neptune," that combines passive and active cooling. Dell EMC highlighted its relationship with CoolIT Systems for direct liquid cooling. All are pursuing fanless solutions, which can reduce overall power requirements by 15%-30%.
While containers are just entering the purview of enterprise IT as the core of application portability across clouds, this is old news for the HPC / AI crowd. There, containers are a fait accompli and the clear direction for the current generation of supercomputing services. However, instead of Docker, the SuperComputing crowd uses Singularity, both because it handles MPI workloads more efficiently and because Docker's root-privileged daemon poses a security risk on shared systems. This is also tied to making supercomputing and AI easier to use. Research centers are increasingly competing for enterprise business, and the new class of data scientists requires environments that are much simpler to access and use. Thus the university labs are investing in web interfaces and APIs to provide easier access and scheduling. For instance, TACC has 32 people working on improving its interfaces, and other universities gave similar examples.
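To make the MPI point concrete, the pattern HPC centers typically describe is a hybrid launch: the host's MPI launcher starts the container once per rank, and the MPI library inside the image cooperates with the host's process manager. Below is a minimal sketch of that pattern, assuming mpirun and Singularity are installed on the host; the image path, application binary, and rank count are hypothetical placeholders.

    import subprocess

    # Hybrid MPI launch sketch: the host's mpirun starts one Singularity
    # container instance per MPI rank. Paths below are hypothetical.
    IMAGE = "/scratch/containers/solver.sif"   # Singularity image (hypothetical path)
    APP = "/opt/app/bin/solver"                # MPI-enabled binary inside the image (hypothetical)
    RANKS = 4

    cmd = ["mpirun", "-np", str(RANKS), "singularity", "exec", IMAGE, APP]
    subprocess.run(cmd, check=True)            # requires mpirun and singularity on the host

The same command line could just as easily sit in a batch script; the point is that the container runs inside the MPI launch rather than wrapping it, which is what keeps containerized MPI jobs efficient on these systems.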
Last year, we were taken aback by the number of data storage providers at the show. This year, data storage was still a topic, but the conversation has moved on to some new issues. SuperComputing is the domain of cutting-edge data centers, and while most will still be dominated by PhDs, the SuperComputing community understands that there is a new set of kids on the block: data scientists and enterprise IT users. It plans to cater to this crowd and provide its share of services.