Obama signs executive order authorizing development of exascale supercomputers

Titan, former world’s fastest supercomputer (credit: Oak Ridge National Laboratory)

President Obama has signed an executive order authorizing the National Strategic Computing Initiative (NSCI), with the goal of creating the world’s fastest supercomputers. The NSCI is charged with building the world’s first-ever exascale* (1,000-petaflops) computer — 30 times faster than today’s fastest supercomputer.

The order mandates:

  1. Accelerating delivery of a capable exascale computing system that integrates hardware and software capability to deliver approximately 100 times the performance of current 10 petaflop systems across a range of applications representing government needs.
  2. Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing.
  3. Establishing, over the next 15 years, a viable path forward for future HPC systems even after the limits of current semiconductor technology are reached (the “post-Moore’s Law era”).
  4. Increasing the capacity and capability of an enduring national HPC ecosystem by employing a holistic approach that addresses relevant factors such as networking technology, workflow, downward scaling, foundational algorithms and software, accessibility, and workforce development.
  5. Developing an enduring public-private collaboration to ensure that the benefits of the research and development advances are, to the greatest extent, shared between the United States Government and industrial and academic sectors.

Regaining number 1

In 2013, the U.S. lost its claim to the world’s fastest supercomputer, according to the TOP500 lists of the world’s most powerful supercomputers. Titan, with an Rmax of 17.59 petaflop/s (quadrillions of calculations per second) on the Linpack benchmark, was overtaken by China’s Tianhe-2, a 33.86-petaflop/s supercomputer developed by China’s National University of Defense Technology.

There are three lead agencies for the NSCI:  the Department of Energy (DOE), the Department of Defense (DOD), and the National Science Foundation (NSF).  There are also two foundational research and development agencies for the NSCI:  the Intelligence Advanced Research Projects Activity (IARPA) and the National Institute of Standards and Technology (NIST).

* Exa: 10¹⁸; peta: 10¹⁵
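The performance targets above are easy to sanity-check. A minimal sketch, using only the figures quoted in this article (Tianhe-2 at 33.86 petaflop/s, Titan at 17.59 petaflop/s, and the 1,000-petaflop/s exascale goal):

```python
# Back-of-the-envelope check of the performance targets described above.
PETA = 10**15
EXA = 10**18

tianhe2 = 33.86 * PETA   # world's fastest system at the time, flop/s
titan = 17.59 * PETA     # fastest U.S. system, flop/s
exascale = 1 * EXA       # NSCI target, flop/s

# "30 times faster than today's fastest supercomputer"
print(f"Exascale vs. Tianhe-2: {exascale / tianhe2:.1f}x")

# Mandate 1: "approximately 100 times the performance of current 10 petaflop systems"
print(f"Exascale vs. a 10-petaflop system: {exascale / (10 * PETA):.0f}x")
```

The ratio to Tianhe-2 works out to roughly 29.5, which the article rounds to "30 times faster."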

IBM announces first 7nm node test chips

7nm node test chips (credit: Darryl Bautista/IBM)

IBM Research has announced the semiconductor industry’s first 7nm (nanometer) node test chips, which could allow for chips with more than 20 billion transistors, IBM believes — a big step forward from today’s most advanced chips, made using 14nm technology.

IBM achieved the 7 nm node through a combination of new materials, tools, and techniques, explained Mukesh Khare, VP of IBM Semiconductor Technology Research, in a blog post. “In materials, we’re using silicon germanium for the first time in the channels on the chips that conduct electricity. We have employed a new type of lithography in the chip-making process, Extreme Ultraviolet, or EUV, which delivers order-of-magnitude improvements over today’s mainstream optical lithography.”

However, as future technology starts to hit the quantum wall, “there’s no clear path to extend the life of the silicon semiconductor further into the future,” he noted.  “The next major wave of progress, the 5 nm node, will be even more challenging than the 7 nm node has been.”
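A rough way to see why a 7nm node permits so many more transistors than today's 14nm chips: if every feature shrank linearly with the node name, density would scale with the square of the ratio. This is an idealized sketch; real process nodes fall short of this bound, and node names only loosely track physical feature sizes.

```python
# Idealized density scaling between process nodes: if all features shrank
# linearly with the node name, transistor density would scale with the
# square of the ratio. Real processes achieve less than this ideal, so
# treat these numbers as rough upper bounds, not measured gains.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(14, 7))   # 4.0, i.e. up to ~4x the transistors in the same area
print(ideal_density_gain(7, 5))    # ~1.96, the tougher 5 nm step Khare mentions
```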

IBM 7nm node test chip closeup (credit: Darryl Bautista/IBM)

Meanwhile, industry experts consider 7nm technology crucial to meeting the anticipated demands of future cloud computing and Big Data systems, cognitive computing, mobile products, and other emerging technologies, says IBM. Part of IBM’s $3 billion, five-year investment in chip R&D (announced in 2014), this accomplishment was the result of a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers.

When will it be available in products? IBM “declined to speculate on when it might begin commercial manufacturing of this technology generation,” The New York Times reports. Intel’s public roadmap indicates that it’s also working on a 7 nanometer chip, Wired notes.

Neuroscientists create organic-computing ‘Brainet’ network of rodent and primate brains — humans next

Experimental apparatus scheme for a Brainet computing device. A Brainet of four interconnected brains is shown. The arrows represent the flow of information through the Brainet. Inputs were delivered (red) as simultaneous intracortical microstimulation (ICMS) patterns (via implanted electrodes) to the somatosensory cortex of each rat. Neural activity (black) was then recorded and analyzed in real time. Rats were required to synchronize their neural activity with the other Brainet participants to receive water. (credit: Miguel Pais-Vieira et al./Scientific Reports)

Duke University neuroscientists have created a network called “Brainet” that merges the collective brain activity of multiple rodents, captured via arrays of electrodes implanted in their brains, allowing the animals to jointly control a virtual avatar arm or perform sophisticated computations — including image pattern recognition and even weather forecasting.

Brain-machine interfaces (BMIs) are computational systems that allow subjects to use their brain signals to directly control the movements of artificial devices, such as robotic arms, exoskeletons or virtual avatars. The Duke researchers at the Center for Neuroengineering previously built BMIs to capture and transmit the brain signals of individual rats, monkeys, and even human subjects, to control devices.

“Supra-brain” — the Matrix for monkeys?

In the new research, reported in two open-access papers in the July 9, 2015 issue of Scientific Reports, rhesus monkeys were outfitted with electrocorticographic (ECoG) multiple-electrode arrays implanted in their motor and somatosensory cortices to capture and transmit their brain activity.

For one experiment, two monkeys were placed in separate rooms where they observed identical images of an avatar on a display monitor in front of them, and worked together to move the avatar on the screen to touch a moving target.

In another experiment, three monkeys were able to mentally control three degrees of freedom (dimensions) of a virtual arm movement in 3-D space. To achieve this performance, all three monkeys had to synchronize their collective brain activity to produce a “supra-brain” in charge of generating the 3-D movements of the virtual arm.

In the second Brainet study, three to four rats whose brains had been interconnected via pairwise brain-to-brain interfaces (BtBIs) performed a variety of sophisticated shared classification and other computational tasks in a distributed, parallel computing architecture.

Human Brainets next

These results support the original claim of the Duke researchers that Brainets may serve as test beds for the development of organic computers created by interfacing multiple animal brains with computers. This arrangement would employ a unique hybrid digital-analog computational engine as the basis of its operation, in a clear departure from the classical digital-only mode of operation of modern computers.

“This is the first demonstration of a shared brain-machine interface,” said Miguel Nicolelis, M.D., Ph.D., co-director of the Center for Neuroengineering at the Duke University School of Medicine and principal investigator of the study. “We foresee that shared-BMIs will follow the same track and soon be translated to clinical practice.”

Nicolelis and colleagues of the Walk Again Project, based at the project’s laboratory in Brazil, are currently working to implement a non-invasive human Brainet to be employed in their neuro-rehabilitation training paradigm with severely paralyzed patients.


In this movie, three monkeys share control over the movement of a virtual arm in 3-D space. Each monkey contributes to two of three axes (X, Y and Z). Monkey C contributes to y- and z-axes (red dot), monkey M contributes to x- and y-axes (blue dot), and monkey K contributes to x- and z-axes (green dot). The contribution of the two monkeys to each axis is averaged to determine the arm position (represented by the black dot). (credit: Arjun Ramakrishnan et al./Scientific Reports)
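The per-axis averaging described in the caption can be sketched in a few lines. This is an illustration of the scheme, not the study's code: each monkey supplies a 2-D contribution in one of the X-Y, Y-Z, or X-Z subspaces named in the paper's abstract, and the two contributions to each axis are averaged. The function name and input values are invented for the example.

```python
# Sketch of the Brainet shared-control scheme: each monkey contributes a
# decoded 2-D movement, and the two contributions to each axis are
# averaged to set the 3-D arm position. Illustrative only.

def brainet_arm_position(c_yz, m_xy, k_xz):
    """Average the two contributors per axis.

    c_yz: monkey C's (y, z) contribution
    m_xy: monkey M's (x, y) contribution
    k_xz: monkey K's (x, z) contribution
    """
    x = (m_xy[0] + k_xz[0]) / 2
    y = (c_yz[0] + m_xy[1]) / 2
    z = (c_yz[1] + k_xz[1]) / 2
    return (x, y, z)

print(brainet_arm_position(c_yz=(2.0, 4.0), m_xy=(1.0, 2.0), k_xz=(3.0, 0.0)))
# (2.0, 2.0, 2.0)
```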


Abstract of Building an organic computing device with multiple interconnected brains

Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, and then delivered to the somatosensory cortices of other animals that participated in the Brainet using intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, such as discrete classification, image processing, storage and retrieval of tactile information, and even weather forecasting. Brainets consistently performed at the same or higher levels than single rats in these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors as well as a test bed for exploring the properties and potential applications of organic computers.

Abstract of Computing arm movements with a monkey Brainet

Traditionally, brain-machine interfaces (BMIs) extract motor commands from a single brain to control the movements of artificial devices. Here, we introduce a Brainet that utilizes very-large-scale brain activity (VLSBA) from two (B2) or three (B3) nonhuman primates to engage in a common motor behaviour. A B2 generated 2D movements of an avatar arm where each monkey contributed equally to X and Y coordinates; or one monkey fully controlled the X-coordinate and the other controlled the Y-coordinate. A B3 produced arm movements in 3D space, while each monkey generated movements in 2D subspaces (X-Y, Y-Z, or X-Z). With long-term training we observed increased coordination of behavior, increased correlations in neuronal activity between different brains, and modifications to neuronal representation of the motor plan. Overall, performance of the Brainet improved owing to collective monkey behaviour. These results suggest that primate brains can be integrated into a Brainet, which self-adapts to achieve a common motor goal.

Next-generation energy-efficient light-based computers

Infrared light enters this silicon structure from the left. The cut-out patterns, determined by an algorithm, route two different wavelengths of this light into the two pathways on the right. (credit: Alexander Piggott)

Stanford University engineers have developed a new design algorithm that can automate the process of designing optical interconnects, which could lead to faster, more energy-efficient computers that use light rather than electricity for internal data transport.

Light can transmit more data while consuming far less power than electricity. According to a study by David Miller, the W.M. Keck Foundation Professor of Electrical Engineering at Stanford, up to 80 percent of microprocessor power is consumed by sending data over interconnects (wires that connect chips).

In addition, “for chip-scale links, light can carry more than 20 times as much data,” said Stanford graduate student Alexander Y. Piggott, lead author of a Nature Photonics article.

However, designing optical interconnects (using silicon fiber-optic cables) is complex, and each interconnect requires a custom design. Given that thousands of interconnects are needed for each electronic system, optical data transport has remained impractical.

Optimized design of optical interconnects

Now the Stanford engineers believe they’ve broken that bottleneck by inventing what they call an “inverse design algorithm.” It works as the name suggests: the engineers specify what they want the optical circuit to do, and the software provides the details of how to fabricate a silicon structure to perform the task.
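The "specify the goal, let the software find the structure" idea can be illustrated with a toy optimization loop. This is a minimal sketch, not the Stanford algorithm: the `simulate` function here is an arbitrary stand-in for an electromagnetic solver, and the optimizer is plain finite-difference gradient descent rather than the adjoint-based method used for real photonic structures.

```python
# Toy illustration of inverse design: specify the desired response, then
# optimize design parameters until a (stand-in) simulator matches it.

def simulate(design):
    # Stand-in for an EM simulation mapping design parameters to a response.
    a, b = design
    return (a * 0.5 + b * 0.2, a * 0.1 + b * 0.7)

target = (1.0, 0.0)  # desired response: pass channel 1, block channel 2

def loss(design):
    response = simulate(design)
    return sum((r - t) ** 2 for r, t in zip(response, target))

# Finite-difference gradient descent over the design parameters.
design = [0.0, 0.0]
eps, lr = 1e-6, 0.5
for _ in range(500):
    grad = []
    for i in range(len(design)):
        bumped = list(design)
        bumped[i] += eps
        grad.append((loss(bumped) - loss(design)) / eps)
    design = [d - lr * g for d, g in zip(design, grad)]

print([round(d, 2) for d in design], round(loss(design), 6))
```

The real problem is vastly harder (the design space is a fabricable silicon pattern, and the "simulator" is a full electromagnetic solver), but the structure of the search is the same: a forward model, an objective, and gradients driving the design toward the specified behavior.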

The wavelength demultiplexer developed by the Stanford team comprised one input waveguide, two output waveguides, and a chip for switching outputs based on incoming wavelengths (credit: Alexander Y. Piggott et al./Nature Photonics)

“We used the algorithm to design a working optical circuit and made several copies in our lab,” said Jelena Vuckovic, a Stanford professor of electrical engineering and senior author of the article.

The optical circuit they created was a silicon wavelength demultiplexer (which splits incoming light into multiple channels based on the wavelengths of the light). The device split 1,300 nm and 1,550 nm light from an input waveguide into two output waveguides.

(“Multiplexing” allows multiple signals to be transmitted over a thin fiber-optic cable, which is how the Internet and cable television are able to transmit massive amounts of data, not possible with wires.)
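A minimal sketch of the multiplexing idea: two signals ride on carriers of different frequencies (standing in for the 1,300 nm and 1,550 nm channels), share one medium as a simple sum, and are recovered by correlating against each carrier. The frequencies and symbol values here are arbitrary illustrations, not physical parameters from the paper.

```python
# Sketch of wavelength-division multiplexing: two channels share one line
# on different carrier frequencies and are separated at the receiver.
import math

N = 1000
f1, f2 = 13.0, 15.5       # stand-in carrier frequencies (arbitrary units)
data1, data2 = 1.0, -1.0  # one symbol per channel

t = [i / N for i in range(N)]
line = [data1 * math.cos(2 * math.pi * f1 * ti) +
        data2 * math.cos(2 * math.pi * f2 * ti) for ti in t]  # shared medium

def demux(signal, f):
    # Correlate with the carrier; near-orthogonality of the two carriers
    # over the sample window isolates each channel.
    return 2 / N * sum(s * math.cos(2 * math.pi * f * ti)
                       for s, ti in zip(signal, t))

print(round(demux(line, f1)), round(demux(line, f2)))  # 1 -1
```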

The engineers note that once the algorithm has calculated the proper shape for the task, standard scalable industrial processes can be used to transfer that pattern onto silicon. The device footprint is only 2.8 × 2.8 micrometers, making this the smallest dielectric wavelength splitter to date.

The researchers envision other potential applications for their inverse design algorithm, including high-bandwidth optical communications, compact microscopy systems, and ultra-secure quantum communications.


Abstract of Inverse design and demonstration of a compact and broadband on-chip wavelength demultiplexer

Integrated photonic devices are poised to play a key role in a wide variety of applications, ranging from optical interconnects and sensors to quantum computing. However, only a small library of semi-analytically designed devices is currently known. Here, we demonstrate the use of an inverse design method that explores the full design space of fabricable devices and allows us to design devices with previously unattainable functionality, higher performance and robustness, and smaller footprints than conventional devices. We have designed a silicon wavelength demultiplexer that splits 1,300 nm and 1,550 nm light from an input waveguide into two output waveguides, and fabricated and characterized several devices. The devices display low insertion loss (∼2 dB), low crosstalk (<−11 dB) and wide bandwidths (>100 nm). The device footprint is 2.8 × 2.8 μm², making this the smallest dielectric wavelength splitter.

Combining light and sound to create nanoscale optical waveguides

Researchers have shown that a DC voltage applied to layers of graphene and boron nitride can be used to control light emission from a nearby atom. Here, graphene is represented by a maroon-colored top layer; boron nitride is represented by yellow-green lattices below the graphene; and the atom is represented by a grey circle. A low concentration of DC voltage (in blue) allows the light to propagate inside the boron nitride, forming a tightly confined waveguide for optical signals. (credit: Anshuman Kumar Srivastava and Jose Luis Olivares/MIT)

In a new discovery that could lead to chips that combine optical and electronic components, researchers at MIT, IBM, and two universities say they have found a way to combine light and sound in a single material, with far lower losses than when such devices are made separately and then interconnected.

Light’s interaction with graphene produces vibrating electron particles called plasmons, while light interacting with hexagonal boron nitride (hBN) produces phonons (sound “particles”). MIT’s Nicholas Fang and his colleagues found that when the materials are combined in a certain way, the plasmons and phonons can couple, producing a strong resonance.

The properties of the graphene allow precise control over light, while hBN provides very strong confinement and guidance of the light. Combining the two makes it possible to create new “metamaterials” that marry the advantages of both, the researchers say.

The work is co-authored by MIT associate professor of mechanical engineering Nicholas Fang and graduate student Anshuman Kumar, and their co-authors at IBM’s T.J. Watson Research Center, Hong Kong Polytechnic University, and the University of Minnesota.

According to Phaedon Avouris, a researcher at IBM and co-author of the paper, “The combination of these two materials provides a unique system that allows the manipulation of optical processes.”

The two materials are structurally similar — both composed of hexagonal arrays of atoms that form two-dimensional sheets — but they each interact with light quite differently. The researchers found that these interactions can be complementary, and can couple in ways that afford a great deal of control over the behavior of light.

The hybrid material blocks light when a particular voltage is applied to the graphene layer. When a different voltage is applied, a special kind of emission and propagation, called “hyperbolicity,” occurs. This phenomenon has not been seen before in optical systems, Fang says.

Nanoscale optical waveguides

The result: an extremely thin sheet of material can interact strongly with light, allowing beams to be guided, funneled, and controlled by different voltages applied to the sheet.

The combined materials create a tuned system that can be adjusted to allow light only of certain specific wavelengths or directions to propagate, they say.

These properties should make it possible, Fang says, to create tiny optical waveguides, about 20 nanometers in size — the same size range as the smallest features that can now be produced in microchips.

“Our work paves the way for using 2-D material heterostructures for engineering new optical properties on demand,” says co-author Tony Low, a researcher at IBM and the University of Minnesota.

Single-molecule optical resolution

Another potential application, Fang says, comes from the ability to switch a light beam on and off at the material’s surface; because the material naturally works at near-infrared wavelengths, this could enable new avenues for infrared spectroscopy, he says. “It could even enable single-molecule resolution,” Fang says, of biomolecules placed on the hybrid material’s surface.

Sheng Shen, an assistant professor of mechanical engineering at Carnegie Mellon University who was not involved in this research, says, “This work represents significant progress on understanding tunable interactions of light in graphene-hBN.” The work is “pretty critical” for providing the understanding needed to develop optoelectronic or photonic devices based on graphene and hBN, he says, and “could provide direct theoretical guidance on designing such types of devices. … I am personally very excited about this novel theoretical work.”

The research team also included Kin Hung Fung of Hong Kong Polytechnic University. The work was supported by the National Science Foundation and the Air Force Office of Scientific Research.


Abstract of Tunable Light–Matter Interaction and the Role of Hyperbolicity in Graphene–hBN System

Hexagonal boron nitride (hBN) is a natural hyperbolic material, which can also accommodate highly dispersive surface phonon-polariton modes. In this paper, we examine theoretically the mid-infrared optical properties of graphene–hBN heterostructures derived from their coupled plasmon–phonon modes. We find that the graphene plasmon couples differently with the phonons of the two Reststrahlen bands, owing to their different hyperbolicity. This also leads to distinctively different interaction between an external quantum emitter and the plasmon–phonon modes in the two bands, leading to substantial modification of its spectrum. The coupling to graphene plasmons allows for additional gate tunability in the Purcell factor and narrow dips in its emission spectra.