Many of the world’s largest scientific endeavours, from climate change research to space exploration, rely on powerful supercomputers to process large quantities of data. Within the next decade, a new U.S. supercomputer may give scientists a huge power boost.
On July 30, President Obama signed an executive order to establish the National Strategic Computing Initiative (NSCI) with the goal of building the world’s first exascale supercomputer by 2025.
The initiative aims to solidify U.S. leadership in the field of high-performance computing (HPC) by creating demand for HPC software and hardware developers.
Though today’s supercomputers are relatively rare, the ultimate goal is to expand the applications of HPC and bring them into mainstream use.
Scientists already work on research projects that generate massive amounts of data. The Human Brain Project uses computers to simulate the brain’s inner workings, the Large Hadron Collider collects data on high-speed collisions of subatomic particles, and climate models map global atmospheric conditions at increasingly finer resolutions.
More powerful supercomputers could help answer some big scientific questions in these and other areas. HPC also has industrial uses, such as testing the fluid dynamics of new aircraft and automobile designs. Testing a prototype in a virtual setting can accelerate product development and reduce costs at the same time.
One quintillion = one billion billions = 1,000,000,000,000,000,000
The proposed exascale computer represents a significant increase in supercomputer performance. It would be capable of completing one quintillion floating point operations per second (10¹⁸ FLOPS, or one exaflop). This would be nearly 30 times more powerful than the world’s current leader, China’s Tianhe-2.
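The "nearly 30 times" figure can be checked with some quick arithmetic. The sketch below assumes Tianhe-2’s widely reported Linpack benchmark score of roughly 33.86 petaflops:

```python
# Back-of-the-envelope comparison of an exascale machine with Tianhe-2.
# Tianhe-2's ~33.86 petaflop/s figure is its reported Linpack benchmark score.
EXAFLOP = 1e18            # one quintillion floating point operations per second
TIANHE_2 = 33.86e15       # ~33.86 petaflop/s

speedup = EXAFLOP / TIANHE_2
print(f"An exascale computer would be roughly {speedup:.0f}x faster than Tianhe-2")
```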
One quintillion seems like an astoundingly large number, but global data consumption is already measured in exabytes. Cisco’s Visual Networking Index estimates that 2015 global Internet traffic will reach 869 exabytes, and will pass one zettabyte (10²¹ bytes) in 2016. A new class of supercomputer is necessary to analyse data on such a large scale.
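For a sense of scale, the traffic estimate above sits just below the zettabyte mark, as a quick calculation shows:

```python
# How close is 869 exabytes of annual traffic to one zettabyte?
EXABYTE = 10**18
ZETTABYTE = 10**21

traffic_2015 = 869 * EXABYTE          # Cisco's 2015 estimate
print(traffic_2015 / ZETTABYTE)       # fraction of a zettabyte: 0.869
```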
Building an exascale supercomputer is not as simple as attaching smaller computers together. Reaching that level of power will require new designs in the architecture of computer processors so that tasks are completed efficiently.
Supercomputers require large supplies of energy to operate; a scaled-up version of today’s machines would need its own power plant. Even after an exascale supercomputer is built, creating programs that take advantage of its processing power will pose another challenge. Setting a goal for exascale computing is a first step, but many technological hurdles remain.
To cope with the development costs and design challenges, the Obama initiative will combine the efforts of the Department of Energy, the Department of Defense, and the National Science Foundation. These agencies have experience building supercomputers for use in the national laboratories and in research universities around the United States. However, the White House aims to broaden the scope of supercomputing along with its scale.
The NSCI Fact Sheet states that agencies will “make HPC resources more readily available so that scientific researchers in both the public and private sectors have ready access.” Developing technology to push the limits of computing performance could make supercomputers more accessible at the same time.