Microsoft builds huge cloud-based AI supercomputer for OpenAI

'Top 5' supercomputer features 285,000 processor cores and 10,000 GPUs

Microsoft announced on Tuesday that it has built a massive supercomputer for OpenAI, a company with a mission to build an artificial general intelligence that will benefit humanity as a whole.

The announcement came at Microsoft's Build 2020 developer conference, which is being hosted online due to the coronavirus pandemic.

Microsoft's new supercomputer is hosted in the Azure cloud and is claimed to be among the top five supercomputers in the world. It features 285,000 processor cores and 10,000 GPUs and can offer 400 gigabits per second of network connectivity for each GPU server, Microsoft says.

However, Microsoft did not share actual performance figures for its machine.

Supercomputers across the world are ranked by processing speed twice a year by experts at the Top500 project. The machines are scored on how fast they can run the Linpack benchmark. IBM's Power Systems-based Summit is currently ranked as the world's most powerful supercomputer, reaching over 148,000 teraflops (about 148.6 petaflops).
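For scale, Top500 scores are usually quoted in petaflops rather than teraflops. A quick sketch of the conversion, using only the Summit figure quoted above:

```python
# Unit conversion for the Top500 figure quoted above.
# 1 petaflop = 1,000 teraflops = 10**15 floating-point operations per second.

summit_teraflops = 148_600  # Summit's Linpack score, ~148.6 petaflops

petaflops = summit_teraflops / 1_000
flops = summit_teraflops * 10**12

print(f"{petaflops:.1f} petaflops")  # 148.6 petaflops
```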

According to Microsoft, its Azure machine is ideal for AI that learns from analysing billions of pages of publicly available text.

Microsoft says that the supercomputer's resources, as well as AI models and training tools, will be available to scientists, customers and developers through Azure AI and GitHub.

In July 2019, Microsoft announced a $1bn investment in OpenAI to jointly develop AI supercomputing technologies on Azure.

OpenAI is a non-profit organisation co-founded by Elon Musk in 2015. It receives funding from venture capitalist Peter Thiel, LinkedIn co-founder Reid Hoffman and Y Combinator's Jessica Livingston, among others. It also has corporate ties to Amazon Web Services and IT services firm Infosys.

When OpenAI launched, Musk said the organisation's aim was to develop AI for good causes and to reduce the risk of harm by distributing the technology as widely as possible.

Microsoft officials also rolled out a new version of DeepSpeed, the company's open-source deep-learning library for PyTorch, which Microsoft said reduces the computing power required to train large models.
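DeepSpeed training runs are driven by a JSON configuration file. A minimal sketch of such a config (field names taken from the DeepSpeed documentation; the values here are illustrative, not Microsoft's own settings):

```json
{
  "train_batch_size": 32,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 1
  }
}
```

The "zero_optimization" section enables ZeRO, the memory-partitioning scheme that lets larger models be trained on the same hardware, while "fp16" switches on mixed-precision training to cut memory and compute further.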

Microsoft also said that it would soon begin open-sourcing its Turing models as well as "recipes for training them in Azure Machine Learning". The Turing models are a family of large AI models the company uses to improve language understanding across Office, Bing, Dynamics and other products.