IARPA Spurs Race to Speed Cryogenic Computing Reality

The race is on to carve a path to efficient extreme-scale machines in the next five years, but existing processing approaches fall far short of the efficiency and performance targets required. As we reported at the end of 2016, the Department of Energy in the U.S. is keeping its eye on non-standard processing approaches for one of its exascale-class systems by 2021, and other groups, including the IEEE, are likewise exploring new architectures as CMOS alternatives.

While no silver bullet technology has yet emerged that we expect will sweep aside current computing norms, superconducting circuits appear to be garnering more attention and investment from large institutions, including the U.S. Intelligence Advanced Research Projects Activity (IARPA).

IARPA is currently backing the Cryogenic Computing Complexity (C3) program, which was announced in 2014 and is ongoing. As Marc Manheimer, C3 program manager, explains, “Computers based on superconducting logic integrated with new kinds of cryogenic memory will allow expansion of current computing facilities while staying within space and energy budgets, and may enable supercomputer development beyond exascale.” We will be publishing a deeper interview with Manheimer later this week to explore the producibility of such systems at scale, the programming complexity, and other practical questions. In the meantime, there are other efforts underway to bring the vision of the C3 program closer to reality.

“The goal of the C3 program is to establish superconducting computing as a long-term solution to the power-cooling problem and a successor to end-of-roadmap CMOS for high performance computing. While, in the past, significant technical obstacles prevented serious exploration of superconducting computing, recent innovations have created conditions for a major breakthrough. These include new families of superconducting logic without static power dissipation and new ideas for energy efficient cryogenic memory. A superconducting computer also promises a simplified cooling infrastructure and greatly reduced footprint.”

A new companion effort that sits alongside C3, called the SuperTools program, will push for faster innovation in superconducting circuit implementation, production, and software. According to a report from Sandia National Laboratories, this five-year program (ending just shy of the 2021 timeline to deliver an exascale system based on a novel architecture) will “evaluate the tools and fabricate test circuits to compare with simulated results.” The teams have the goal of producing and testing a “high-speed, low-energy 64-bit RISC processor using Josephson Junction based logic cells.”
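The appeal of Josephson junction logic comes down to switching energy: a single-flux-quantum (SFQ) pulse dissipates on the order of the junction's critical current times the magnetic flux quantum per switching event, far below the switching energy of a CMOS gate. A rough back-of-envelope comparison is sketched below; the 100 µA critical current and the femtojoule-scale CMOS figure are illustrative assumptions, not values drawn from the SuperTools program.

```python
# Back-of-envelope switching-energy comparison: SFQ logic vs. CMOS.
# The critical current and CMOS per-gate energy are illustrative assumptions.

h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

phi0 = h / (2 * e)   # magnetic flux quantum, ~2.07e-15 Wb

i_c = 100e-6         # assumed junction critical current, A
e_sfq = i_c * phi0   # energy per SFQ switching event, ~2e-19 J

e_cmos = 1e-15       # assumed rough per-gate CMOS switching energy, J

print(f"Flux quantum:      {phi0:.3e} Wb")
print(f"SFQ switch energy: {e_sfq:.3e} J")
print(f"CMOS/SFQ ratio:    {e_cmos / e_sfq:.0f}x")
```

Even with these rough numbers, the per-operation gap is several orders of magnitude, though part of that gain is consumed by refrigeration overhead (on the order of hundreds of watts of wall power per watt removed at liquid-helium temperatures), which is why cooling infrastructure is a central concern for the C3 program.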

Testing and evaluation efforts are being led by teams from NIST, MIT Lincoln Laboratory, Lawrence Berkeley National Lab, and Sandia. The focus of the SuperTools program will be mainly on developing the EDA tools required to scale beyond current limits in superconducting systems-on-chip, leading to 3D integrated circuits. Teams at Lawrence Berkeley National Lab will work with the RISC-V architecture, developed in-house at Berkeley, and will provide GFI (government-furnished information) based on two implementations of the architecture: the 32-bit Z-scale and the 64-bit Rocket.

In the second phase of the program, the LBNL team is tasked with delivering “a core or cores based on the Rocket generator, which generates configurable 5-stage pipeline cores with 32, 64, or 128 bits and optional single or double precision or fixed-point math in the pipeline.” Beyond its role in this project, that work signals what lies ahead for RISC-V as an architecture in the coming years.

MIT Lincoln Laboratory will serve as the fabrication site for the program’s devices and test circuits via its foundry, in addition to contributing software tool development. NIST will also fabricate and test devices and circuits at its Boulder microfabrication facility and match simulation results against its own test runs. Sandia’s role will center on delivering the EDA tools, something it already has experience with from its internal CMOS development for NNSA and other workloads.

While the roles are now settled among the institutions, there are some perceived trouble spots ahead. As the lead author of the Sandia report on SuperTools, Tom Mannos, notes, “fabrication and test are limited to small test devices and circuits for model calibration, providing insufficient ground truth to evaluate the accuracy of physical design and verification tools for chip-level phenomena and macroscopic circuit behavior.” Among the other technical challenges is that “testing will take place in a liquid helium environment, whereas production of circuits will operate in a cryostat; this may lead to unrealistic modeling and testing of thermal management approaches.”

As noted previously, we will provide more background on the current technical and producibility hurdles for cryogenic computing via an interview with the C3 program lead this week. This effort, like several others that have spurred innovation, stems from the 2015 National Strategic Computing Initiative (NSCI), which pushes exascale innovation within a sustainable power envelope.

The race is on to reach exascale in the 2020s, as we have described in several articles here at The Next Platform, but if anything has become clear over the last few years, it is that existing CMOS technologies might not be the most efficient path. The focus on novel architectures for high performance computing will continue to gather force throughout 2017, and we expect equal attention on the topic in other areas of computing, particularly machine learning at scale.
