NEWS
The next big breakthrough in supercomputing may not come from IBM Corp., Cray Inc. or any of the other original pioneers of the field, but rather from a new interagency task force formed through an executive order this week with the goal of building the world’s first exascale cluster. That represents a leap of more than an order of magnitude beyond the current reigning champion.
The 33.86-petaflop maximum speed of the Tianhe-2 supercomputer at the National Supercomputer Center in Guangzhou represents the cumulative processing power of some 80,000 chips that took 1,400 Chinese engineers several years to put together. Increasing that thirtyfold may not seem like too difficult a proposition given the breakneck evolution of technology, but several major logistical hurdles stand in the way.
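The thirtyfold figure follows from a quick back-of-the-envelope calculation, assuming the standard definition of one exaflop as 1,000 petaflops:

```python
# Back-of-the-envelope check of the speedup the article describes.
# 1 exaflop = 1,000 petaflops; Tianhe-2 peaks at 33.86 petaflops.
EXAFLOP_IN_PETAFLOPS = 1_000
tianhe2_petaflops = 33.86

speedup = EXAFLOP_IN_PETAFLOPS / tianhe2_petaflops
print(f"Required speedup: {speedup:.1f}x")  # roughly thirtyfold
```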
For starters, processors aren’t getting faster at the rate they once did. Fifty years after Intel Corp. co-founder Gordon Moore made his famous prediction about the doubling of transistor density every two years, chip makers are starting to bump up against the physical limits of silicon, which recently forced his company to push back its next jump down the size scale to 2017.
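Moore’s observation compounds quickly, which is why even a short slip matters. A minimal sketch of the doubling math, assuming the historical two-year cadence:

```python
# Transistor density growth under Moore's observation:
# density doubles every `doubling_period` years.
def density_multiplier(years, doubling_period=2):
    """How many times denser chips get after `years` at a given doubling pace."""
    return 2 ** (years / doubling_period)

print(density_multiplier(10))  # 32x over a decade at the historical pace
```

Stretching the doubling period from two years to three drops the decade’s gain from 32x to about 10x, which illustrates why slowing density growth forces designers to add more chips instead.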
Research on alternative computing technologies is making fast progress at companies like IBM, but a commercially viable replacement for silicon isn’t likely to emerge in time for the project’s 2025 deadline. As a result, the slowing gains in processing density will have to be compensated for with additional chips in the proposed supercomputer, a hugely expensive trade-off that the effort will try to address.
That will be achieved not only by developing ways to make the silicon itself more efficient but also by streamlining the space, energy and management requirements that will account for the bulk of the project’s costs over the course of its lifetime. And then there’s the matter of figuring out how to put together the tens if not hundreds of thousands of processors that the cluster will require, which is not nearly as straightforward as simply stacking a bunch of servers in a room.
The project will be led by the U.S. Department of Energy, the Department of Defense and the National Science Foundation, the three heaviest users of supercomputing in the public sector, but the fruits of the initiative could benefit the government as a whole. The National Institutes of Health, the Department of Homeland Security and NOAA are only a few of the other agencies that also rely on supercomputers to carry out their work.
That is not to say the private sector will be excluded, however. After all, the processors for the cluster will have to be bought from somewhere, along with the outside expertise needed to help meet the ten-year deadline. The newly formed National Strategic Computing Initiative has 90 days to submit the initial roadmap for the project.