Nvidia adds nine supercomputing containers to its GPU cloud
Signals the firm's increased push into GPU-accelerated high-performance computing and AI
GPU giant Nvidia has added nine new GPU-charged supercomputing containers to its Nvidia GPU Cloud (NGC) service.
Announced at this week's International Supercomputing Conference in Frankfurt, the additional containers, including CHROMA, CANDLE and PGI, bring the total on the firm's platform to 35, more than triple the number available when the service launched last year.
At the previous year's Supercomputing conference, Nvidia added eight containers, including NAMD, GROMACS and ParaView.
The move signals the firm's continued push into GPU-accelerated high-performance computing and AI.
"Over the past three years, containers have become a crucial tool in deploying applications on a shared cluster and speeding the work, especially for researchers and data scientists running AI workloads," Nvidia said in a statement.
"These containers make deploying deep learning frameworks — building blocks for designing, training and validating deep neural networks — faster and easier."
Installing frameworks is complicated and time-consuming; Nvidia said containers simplify the process, giving users access to the latest application versions with simple pull and run commands.
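The article doesn't spell those commands out; as a rough sketch (the image path and tag below are illustrative, not confirmed NGC names), pulling and running an NGC container looks like this:

```shell
# Log in to Nvidia's container registry (an NGC account is required).
docker login nvcr.io

# Pull a framework image; the repository path and tag here are
# illustrative, not exact NGC names.
docker pull nvcr.io/nvidia/tensorflow:latest

# Run it with GPU access via the nvidia-docker wrapper of the era.
nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:latest
```

Two commands replace what would otherwise be a lengthy framework build and install.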
The same deployment challenge applies to HPC and visualisation applications.
"The container for PGI compilers available on NGC will help developers build HPC applications targeting multicore CPUs and NVIDIA Tesla GPUs," the company continued. "PGI compilers and tools enable development of performance-portable HPC applications using OpenACC, OpenMP and CUDA Fortran parallel programming."
"Users clearly see the value of NGC containers, with over 27,000 users now registered to access the NGC container registry."
The need for containers isn't limited to deep learning, either. Nvidia added that supercomputing has a "dire" need for simpler application deployment across all segments, since almost all supercomputing centres rely on environment modules to build, deploy, and launch applications.
"This is a time-consuming and unproductive approach which can take days, making it unproductive for both the system administrators and the end-users," said the firm.
"The complexity of such installs in supercomputing limits users from accessing the latest features and enjoying optimised performance, in turn delaying discoveries."
Containers offer an alternative because there is no installation step, so no one has to keep track of environment module links or worry about breaking them. Users simply pull and run a container to deploy an application in minutes, rather than waiting days for an advisory council to approve an install and for the process itself to run its course.
This also means system administrators can focus on mission-critical tasks rather than servicing and maintaining applications.
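To make the contrast concrete (the module names and image path below are hypothetical), a typical environment-modules workflow next to the container one-liner:

```shell
# Traditional approach: an administrator builds the application and
# its whole dependency chain, then users load the matching modules.
module avail                  # list what the centre has installed
module load gcc/7.3 cuda/9.2  # hypothetical module names
module load gromacs/2018      # fails unless an admin has built it

# Container approach: pull a prebuilt image and run it directly;
# the image path is illustrative, not an exact NGC name.
nvidia-docker run -it --rm nvcr.io/hpc/gromacs:2018 gmx --version
```

The module route depends on every link in the chain being built and kept consistent by the centre's staff; the container carries its dependencies with it.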