Best of MOOGFEST 2017

The four-day Moogfest festival in Durham, North Carolina next weekend (May 18–21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.

Culture and Technology

(credit: Google)

The Magenta team from Google Brain will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project that asks and answers the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art in machine-generated music, video, images, and text; second, it is an effort to build a community of artists, coders, and machine learning researchers.

The interactive demo will walk through an improvisation with the machine learning models, much like the AI Jam Session. The workshop will cover how to use the open-source library to build and train models and to interact with them via MIDI.
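As a rough illustration of that MIDI call-and-response idea, here is a minimal Python sketch. It stands in for Magenta's neural models with a trivial transition table, and none of the function names below are Magenta API calls; a real setup would route the generated notes back out over MIDI.

```python
import random

# Toy "model": continue a melody by sampling transitions observed in the input.
# This is a stand-in for Magenta's RNN models, not the Magenta API.
def train_transitions(notes):
    transitions = {}
    for a, b in zip(notes, notes[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def continue_melody(transitions, seed, length=8):
    melody = [seed]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        melody.append(random.choice(choices) if choices else melody[-1])
    return melody

if __name__ == "__main__":
    # MIDI note numbers a human might have just played (the "call")...
    human_phrase = [60, 62, 64, 65, 67, 65, 64, 62]   # a C-major run
    model = train_transitions(human_phrase)
    # ...and the machine's generated "response", which a real system
    # would send back out over MIDI.
    print(continue_melody(model, seed=60))
```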

Technical reference: Magenta: Music and Art Generation with Machine Intelligence


TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool


Miguel Nicolelis (credit: Duke University)

Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism


Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Theme: Instrument Design


Berklee College of Music

Michael Bierylo (credit: Moogfest)

Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.

Csound is a sound and music computing system originally developed at the MIT Media Lab. It can most accurately be described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. Csound was traditionally used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
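To make that “source code in, stream of audio numbers out” description concrete, here is a small Python sketch (not Csound, and not Csound syntax) that renders a tiny score of sine-wave notes into raw samples and writes them to a WAV file using the standard-library wave module.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

# A tiny "score": (start seconds, duration seconds, frequency Hz, amplitude 0-1)
SCORE = [
    (0.0, 0.5, 261.63, 0.4),  # C4
    (0.5, 0.5, 329.63, 0.4),  # E4
    (1.0, 1.0, 392.00, 0.4),  # G4
]

def render(score, seconds):
    """Turn the textual score into a stream of numbers representing audio."""
    samples = [0.0] * int(seconds * SAMPLE_RATE)
    for start, dur, freq, amp in score:
        for i in range(int(dur * SAMPLE_RATE)):
            t = i / SAMPLE_RATE
            samples[int(start * SAMPLE_RATE) + i] += amp * math.sin(2 * math.pi * freq * t)
    return samples

def write_wav(path, samples):
    with wave.open(path, "wb") as f:
        f.setnchannels(1)          # mono
        f.setsampwidth(2)          # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))

write_wav("triad.wav", render(SCORE, seconds=2.0))
```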

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.


Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music


Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and the NeuroSky brainwave-sensing headset.

Theme: Hacking Systems


Argus Project (credit: Moogfest)

The Argus Project from Gan Golan and Ron Morrison of NEW INC is a wearable sculpture, video installation and counter-surveillance training, which directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state, and showing them to the world, strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. In between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest


Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse, from Found Sound Nation and Moogfest, is an immersive installation housed within a completely customized geodesic dome: a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees, and community members engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome are 9 unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s devices are chained to those of the person sitting next to them, so that everybody’s musical actions and choices affect their neighbor, and thus everyone else at the table. The instrument is a unique experiment in how technology and the instinctive language of sound can play a role in shaping a truly collective unconscious.

Theme: Protest


(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real-time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest


Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often leads to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1-bit computer, which demonstrates how binary numbers and Boolean logic can be configured to create more complex components. On their own these components aren’t capable of computing anything particularly useful, but a computer is said to be Turing complete if it includes all of them, at which point it gains the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC), an artist-run school he co-founded in NYC. See Taeyoon Choi’s Handmade Computer projects.
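As a toy illustration of the workshop's premise (and not Taeyoon's actual circuit), the sketch below builds a 1-bit full adder in Python from a single NAND primitive, showing how Boolean logic composes into more complex components.

```python
# Everything below is built from a single primitive gate: NAND.
def nand(a, b):
    return 0 if (a and b) else 1

def inv(a):     return nand(a, a)
def and_(a, b): return inv(nand(a, b))
def or_(a, b):  return nand(inv(a), inv(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add three 1-bit inputs, returning (sum, carry_out)."""
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

# 1 + 1 with carry-in 0  ->  sum 0, carry 1 (i.e., binary 10)
print(full_adder(1, 1, 0))
```

Chaining full adders bit by bit gives multi-bit arithmetic, which is the same compositional step the Handmade Computer walks through in hardware.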

Theme: Protest


(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions and creates community connections that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest


Ryan Shaw and Michael Clamann (credit: Duke University)

Duke professors Ryan Shaw and Michael Clamann will lead a daily science pub-talk series on topics that include future medicine, humans and autonomy, and quantum physics.

Ryan is a pioneer in mobile health, the collection and dissemination of information using mobile and wireless devices for healthcare. He works with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems


Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist to be revealed in next week’s lineup release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-’70s, Dave designed the Prophet-5, the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.

Theme: Future Thought


Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)

E-mu Systems founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production; the company is the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership behind what many consider the premier professional modular synthesizer system, the E-mu Modular System, which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on developing the Emulator keyboards and racks (e.g., the Emulator II), the Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.


Science

ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)

ATLAS @ CERN. The full ATLAS @ CERN program will be led by Duke University professors Mark Kruse and Katherine Hayles along with ATLAS @ CERN physicist Steven Goldfarb.

The program will include a “Virtual Visit” to the Large Hadron Collider (the world’s largest and most powerful particle accelerator) via a live video session, a half-day workshop on analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact, resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, the experiment seeks to answer fundamental questions such as: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“Atlas Boogie” (referencing Higgs Boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.

Theme: Future Thought


Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought


Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. presidential candidate for the Transhumanist Party and a leader of the transhumanist movement, will explore the path to immortality through science, with the purpose of using science and technology to radically enhance the human being and the human experience. His futurist work has reached over 100 million people, some of it due to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped like a giant coffin to raise life-extension awareness.


Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST

Theme: Transhumanism/Biotechnology


(credit: Moogfest)

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

Theme: Techno-Shamanism

#Moogfest2017

A deep-learning tool that lets you clone an artistic style onto a photo

The Deep Photo Style Transfer tool lets you add artistic style and other elements from a reference photo onto your photo. (credit: Cornell University)

“Deep Photo Style Transfer” is a cool new artificial-intelligence image-editing software tool that lets you transfer a style from another (“reference”) photo onto your own photo, as shown in the above examples.

An open-access arXiv paper by Cornell University computer scientists and Adobe collaborators explains that the tool can transpose the look of one photo (such as the time of day, weather, season, and artistic effects) onto your photo, making it reminiscent of a painting, but that is still photorealistic.

The algorithm also handles extreme mismatch of forms, such as transferring a fireball to a perfume bottle. (credit: Fujun Luan et al.)

“What motivated us is the idea that style could be imprinted on a photograph, but it is still intrinsically the same photo,” said Cornell computer science professor Kavita Bala. “This turned out to be incredibly hard. The key insight finally was about preserving boundaries and edges while still transferring the style.”

To do that, the researchers created deep-learning software that adds a neural network layer paying close attention to edges within the image, like the border between a tree and a lake.
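Loosely sketched (notation approximate, not a verbatim restatement of the paper), the approach augments the usual content-plus-style objective with a photorealism regularization term built from the Matting Laplacian of the input photo, which penalizes output transformations that are not locally affine in color space:

```latex
% Sketch of the Deep Photo Style Transfer objective (notation approximate):
% content loss + style loss + photorealism regularizer
\mathcal{L}_{\mathrm{total}}
  = \sum_{l} \alpha_l \, \mathcal{L}_{\mathrm{content}}^{l}
  + \Gamma \sum_{l} \beta_l \, \mathcal{L}_{\mathrm{style}}^{l}
  + \lambda \, \mathcal{L}_{m},
\qquad
\mathcal{L}_{m} = \sum_{c=1}^{3} V_c[O]^{\top} \, \mathcal{M}_{I} \, V_c[O]
```

Here O is the output image, V_c[O] is its vectorized color channel c, and M_I is the Matting Laplacian computed from the input photo I; minimizing L_m favors locally affine color transformations, which is what keeps the result photorealistic.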

The software is still in the research stage.

Bala, Cornell doctoral student Fujun Luan, and Adobe collaborators Sylvain Paris and Eli Shechtman will present their paper at the Conference on Computer Vision and Pattern Recognition, July 21–26 in Honolulu.

This research is supported by a Google Faculty Research Award and NSF awards.


Abstract of Deep Photo Style Transfer

This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.


Future ‘lightwave’ computers could run 100,000 times faster

TeraHertz pulses in semiconductor crystal (credit: Fabian Langer, Regensburg University)

Using extremely short pulses of terahertz (THz) radiation instead of electrical currents could lead to future computers that run 10 to 100,000 times faster than today’s state-of-the-art electronics, according to an international team of researchers writing in the journal Nature Photonics.

In a conventional computer, electrons moving through a semiconductor occasionally run into other electrons, releasing energy in the form of heat and slowing them down. With the proposed “lightwave electronics” approach, electrons could be guided by ultrafast THz pulses (the part of the electromagnetic spectrum between microwaves and infrared light). That means the travel time can be so short that the electrons would be statistically unlikely to hit anything, according to senior author Rupert Huber, a professor of physics at the University of Regensburg who led the experiment.

In the experiment, the researchers shined THz pulses into a crystal of the semiconductor gallium selenide.* These pulses were ultra-short (less than 100 femtoseconds, or 100 quadrillionths of a second). Each pulse popped electrons in the semiconductor into a higher energy level — which meant that they were free to move around.
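A quick back-of-the-envelope check (using the 33 THz center frequency quoted in the footnote below) shows why such pulses qualify as ultra-short: one oscillation of the driving field lasts only about 30 femtoseconds, so a sub-100-femtosecond pulse contains just a few field cycles.

```python
# One oscillation period of a 33 THz field, in femtoseconds.
frequency_hz = 33e12
period_fs = 1.0 / frequency_hz * 1e15
print(f"{period_fs:.1f} fs per cycle")                      # ~30.3 fs
print(f"cycles in a 100 fs pulse: {100 / period_fs:.1f}")   # ~3.3
```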

When the electrons emitted light as they came down from the higher energy level, they emitted much shorter pulses than the electromagnetic radiation going in — just a few femtoseconds long — quick enough to read and write information to electrons at ultra-high speed.

But first, researchers need to be able to control electrons in a semiconductor. This work takes a step toward this by mobilizing groups of electrons inside a semiconductor crystal.

Quantum computation

Because femtosecond pulses are fast enough to trap an electron between being put into an excited state and coming down from that state, they can potentially also be used for quantum computations, using electrons in excited states as qubits. The researchers managed to launch one electron simultaneously via two excitation pathways, which is not classically possible.

An electron is small enough that it behaves like a wave as well as a particle, and when it is in an excited state, its wavelength changes. Because the electron was in two excited states at once, those two waves interfered with one another and left a fingerprint in the femtosecond pulse that the electron emitted.

The research is funded by the European Research Council and the German Research Foundation.

* “We generated high harmonics by irradiating a 40-μm-thick crystal of gallium selenide with intense, multi-THz pulses. These pulses were obtained by difference frequency mixing of two phase-correlated near-infrared pulse trains from a dual optical parametric amplifier pumped by a titanium sapphire amplifier. … The centre frequency was tunable and set to 33 THz in the experiments.” — F. Langer et al./Nature Photonics

Abstract of Symmetry-controlled temporal structure of high-harmonic carrier fields from a bulk crystal

High-harmonic (HH) generation in crystalline solids marks an exciting development, with potential applications in high-efficiency attosecond sources, all-optical bandstructure reconstruction and quasiparticle collisions. Although the spectral and temporal shape of the HH intensity has been described microscopically, the properties of the underlying HH carrier wave have remained elusive. Here, we analyse the train of HH waveforms generated in a crystalline solid by consecutive half cycles of the same driving pulse. Extending the concept of frequency combs to optical clock rates, we show how the polarization and carrier-envelope phase (CEP) of HH pulses can be controlled by the crystal symmetry. For certain crystal directions, we can separate two orthogonally polarized HH combs mutually offset by the driving frequency to form a comb of even and odd harmonic orders. The corresponding CEP of successive pulses is constant or offset by π, depending on the polarization. In the context of a quantum description of solids, we identify novel capabilities for polarization- and phase-shaping of HH waveforms that cannot be accessed with gaseous sources.

Brain has more than 100 times higher computational capacity than previously thought, say UCLA scientists

Neuron (blue) with dendrites (credit: Shelley Halpain/UC San Diego)

The brain has more than 100 times higher computational capacity than was previously thought, a UCLA team has discovered.

This finding, which upends long-held assumptions in neuroscience textbooks, suggests that our brains are both analog and digital computers, and it could lead to new approaches for treating neurological disorders and developing brain-like computers, according to the researchers.

Illustration of neuron and dendrites. Dendrites receive electrochemical stimulation (via synapses, not shown here) from neurons (not shown here), and propagate that stimulation to the neuron cell body (soma). A neuron sends electrochemical stimulation via an axon to communicate with other neurons via telodendria (purple, right) at the end of the axon and synapses (not shown here). (credit: Quasar/CC).

Dendrites have been considered simple passive conduits of signals. But by working with animals that were moving around freely, the UCLA team showed that dendrites are in fact electrically active — generating nearly 10 times more spikes than the soma (neuron cell body).

Fundamentally changes our understanding of brain computation

The finding, reported in the March 9 issue of the journal Science, challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information.”

“This is a major departure from what neuroscientists have believed for about 60 years,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology.

Because the dendrites are nearly 100 times larger in volume than the neuronal centers, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously thought.

Study with moving rats made discovery possible

Previous studies have been limited to stationary rats, because scientists have found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.

Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.

Looking at the soma to understand how the brain works has provided a framework for numerous medical and scientific questions — from diagnosing and treating diseases to how to build computers. But, Mehta said, that framework was based on the understanding that the cell body makes the decisions, and that the process is digital.

“What we found indicates that such decisions are made in the dendrites far more often than in the cell body, and that such computations are not just digital, but also analog,” Mehta said. “Due to technological difficulties, research in brain function has largely focused on the cell body. But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”

Funding was provided by the University of California.

Complete neuron cell diagram (credit: LadyofHats/CC)


Abstract of Dynamics of cortical dendritic membrane potential and spikes in freely behaving rats

Neural activity in vivo is primarily measured using extracellular somatic spikes, which provide limited information about neural computation. Hence, it is necessary to record from neuronal dendrites, which generate dendritic action potentials (DAP) and profoundly influence neural computation and plasticity. We measured neocortical sub- and suprathreshold dendritic membrane potential (DMP) from putative distal-most dendrites using tetrodes in freely behaving rats over multiple days with a high degree of stability and sub-millisecond temporal resolution. DAP firing rates were several fold larger than somatic rates. DAP rates were modulated by subthreshold DMP fluctuations which were far larger than DAP amplitude, indicating hybrid, analog-digital coding in the dendrites. Parietal DAP and DMP exhibited egocentric spatial maps comparable to pyramidal neurons. These results have important implications for neural coding and plasticity.

IBM-led international research team stores one bit of data on a single atom

Scanning tunneling microscope image of a single atom of holmium, an element that researchers used as a magnet to store one bit of data. (credit: IBM Research — Almaden)

An international team led by IBM has created the world’s smallest magnet, using a single atom of rare-earth element holmium, and stored one bit of data on it over several hours.

The achievement represents the ultimate limit of the classical approach to high-density magnetic storage media, according to a paper published March 8 in the journal Nature.

Currently, hard disk drives use about 100,000 atoms to store a single bit. The ability to read and write one bit on one atom may lead to significantly smaller and denser storage devices in the future. (The researchers are currently working in an ultrahigh vacuum at 1.2 K, a temperature near absolute zero.)

Using a scanning tunneling microscope* (STM), the researchers also showed that a device using two magnetic atoms could be written and read independently, even when they were separated by just one nanometer.

IBM microscope mechanic Bruce Melior at scanning tunneling microscope, used to view and manipulate atoms (credit: IBM Research — Almaden)

The researchers believe this tight spacing could eventually yield magnetic storage that is 1,000 times denser than today’s hard disk drives and solid state memory chips. So they could one day store 1,000 times more information in the same space. That means data centers, computers, and personal devices would be radically smaller and more powerful.

Single-atom write and read operations. (Left) To write the data onto the holmium atom, a pulse of electric current from the magnetized tip of a scanning tunneling microscope (STM) is used to flip the orientation of the atom’s field between a 0 or 1. The STM is also used to read it. (Right) A second read-out method used an iron atom as a magnetic sensor, which also allowed the team to read out multiple bits at the same time, making it more practical than an STM. (credit: IBM Research and Fabian D. Natterer et al./Nature)

Researchers at EPFL in Switzerland, University of Chinese Academy of Sciences in Hong Kong, University of Göttingen in Germany, Universität Zürich in Switzerland, Institute of Basic Science, Center for Quantum Nanoscience in South Korea, and Ewha Womans University in South Korea were also on the research team.

* The STM was developed in 1981, earning its inventors, Gerd Binnig and Heinrich Rohrer (at IBM Zürich), the Nobel Prize in Physics in 1986. IBM is planning future scanning tunneling microscope studies to investigate the potential of performing quantum information processing using individual magnetic atoms. Earlier this week, IBM announced it will be building the world’s first commercial quantum computers for business and science.


IBM Research | IBM Research Created the World’s Smallest Magnet — an Atom


An ultra-low-power artificial synapse for neural-network computing

(Left) Illustration of a synapse in the brain connecting two neurons. (Right) Schematic of artificial synapse (ENODe), which functions as a transistor. It consists of two thin, flexible polymer films (black) with source, drain, and gate terminals, connected by an electrolyte of salty water that permits ions to cross. A voltage pulse applied to the “presynaptic” layer (top) alters the level of oxidation in the “postsynaptic layer” (bottom), triggering current flow between source and drain. (credit: Thomas Splettstoesser/CC and Yoeri van de Burgt et al./Nature Materials)

Stanford University and Sandia National Laboratories researchers have developed an organic artificial synapse based on a new memristor (resistive memory device) design that mimics the way synapses in the brain learn. The new artificial synapse could lead to computers that better recreate the way the human brain processes information. It could also one day directly interface with the human brain.

The new artificial synapse is an electrochemical neuromorphic organic device (dubbed “ENODe”), a mixed ionic/electronic design that is fundamentally different from existing and other proposed resistive memory devices, which are limited by noise, high required write voltages, and other factors*, the researchers note in a paper published online Feb. 20 in Nature Materials.

Like a neural pathway in a brain being reinforced through learning, the artificial synapse is programmed by discharging and recharging it repeatedly. Through this training, the researchers have been able to predict, within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, to remain at that state.

“The working mechanism of ENODes is reminiscent of that of natural synapses, where neurotransmitters diffuse through the cleft, inducing depolarization due to ion penetration in the postsynaptic neuron,” the researchers explain in the paper. “In contrast, other memristive devices switch by melting materials at relatively high temperatures (PCMs) or by voltage-induced breakdown/filament formation and ion diffusion in dense oxide layers (FFMOs).”

The ENODe achieves significant energy savings** in two ways:

  • Unlike a conventional computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts. Traditional computing requires separately processing information and then storing it into memory. Here, the processing creates the memory.
  • When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and co-senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

A future brain-like computer with 500 states

Only one artificial synapse has been produced so far, but researchers at Sandia used 15,000 measurements to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

This artificial synapse may one day be part of a brain-like computer, which could be especially useful for processing visual and auditory signals, as in voice-controlled interfaces and driverless cars, but without energy-consuming computer hardware.

This device is also well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs to move data from the processing unit to the memory.

However, this is still about 10,000 times as much energy as the minimum a biological synapse needs in order to fire**. The researchers hope to attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
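The figures quoted above and in the footnotes are easy to sanity-check: the ENODe switches at under 10 pJ (for a large, 1,000-square-micrometer test device), a biological synaptic event is estimated at roughly 1–100 fJ, and 500 conductance states correspond to about 9 bits per device.

```python
import math

# Rough checks on the figures quoted in the article and its footnotes.
enode_energy = 10e-12                 # < 10 pJ per ENODe switching event (large test device)
synapse_energy = (1e-15, 100e-15)     # ~1-100 fJ per biological synaptic event

for s in synapse_energy:
    print(f"ENODe / {s * 1e15:.0f} fJ synapse: ~{enode_energy / s:,.0f}x")
# -> ~10,000x and ~100x, bracketing the "about 10,000 times" figure above

# 500 distinct states carry log2(500) bits, versus 1 bit for a digital transistor.
print(f"bits per 500-state device: {math.log2(500):.1f}")    # ~9 bits
```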

Linking to live organic neurons

This new artificial synapse may one day be part of a brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these depend on energy-consuming traditional computer hardware.

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The switching voltages applied to train the artificial synapse (about 0.5 mV) are also the same as those that move through human neurons — about 1,000 times lower than the “write” voltage for a typical memristor.

That means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

* “A resistive memory device has not yet been demonstrated with adequate electrical characteristics to fully realize the efficiency and performance gains of a neural architecture. State-of-the-art memristors suffer from excessive write noise, write non-linearities, and high write voltages and currents.  Reducing the noise and lowering the switching voltage significantly below 0.3 V (~10 kT) in a two-terminal device without compromising long-term data retention has proven difficult.” … Organic memristive devices have been recently proposed, but are limited by “the slow kinetics of ion diffusion through a polymer to retain their states or on charge storage in metal nanoparticles, which inherently limits performance and stability.” — Yoeri van de Burgt et al., Nature Materials

** ENODe switches at low voltage and energy (< 10 pJ for 1000-square-micrometer devices), compared to an estimated ∼ 1–100 fJ per synaptic event for the human brain.
 

Abstract of A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10^3 μm^2 devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users

Typing with your mind. You are paralyzed. But now, tiny electrodes have been surgically implanted in your brain to record signals from your motor cortex, the brain region controlling muscle movement. As you think of mousing over to a letter (or clicking to choose it), those electrical brain signals are transmitted via a cable to a computer (replacing your spinal cord and muscles). There, advanced algorithms decode the complex electrical brain signals, converting them instantly into screen actions. (credit: Chethan Pandarinath et al./eLife)

Stanford University researchers have developed a brain-computer interface (BCI) system that can enable people with paralysis* to type (using an on-screen cursor) at speeds and accuracy levels about three times higher than previously reported.

Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute, respectively — all without auto-complete assistance (so it could be much faster).

Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.

Braingate and beyond

The three study participants used a brain-computer interface called the “BrainGate Neural Interface System.” On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.

BrainGate in 2012 (credit: Brown University)

The new research, led by Stanford, takes the BrainGate technology much further**. Participants can now move a cursor (by just thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and they can “point and click” on letters, computer-mouse-style, to type letters and sentences.

The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

As the participant thinks of a specific hand-to-mouse movement (pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are then filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.
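For readers curious about what “decoding” involves, here is a deliberately simplified Python sketch of a linear rate-to-velocity decoder. The study’s actual two algorithms (presumably one for cursor movement and one for clicking) are far more sophisticated; all weights and firing rates below are invented for illustration only.

```python
import numpy as np

# Toy linear decoder: cursor velocity is a weighted sum of binned firing rates.
# Real intracortical BCIs use carefully calibrated decoders; the weights and
# "neural" data here are random stand-ins, not the study's method.
rng = np.random.default_rng(0)
n_channels = 96                      # matches the 96-channel array described above
decoder_weights = rng.normal(scale=0.05, size=(2, n_channels))  # rates -> (vx, vy)

cursor = np.zeros(2)
dt = 0.02                            # 20 ms update interval
for step in range(50):
    firing_rates = rng.poisson(lam=10, size=n_channels)  # spikes per bin (fake data)
    velocity = decoder_weights @ firing_rates            # decoded (vx, vy)
    cursor += velocity * dt
print("final cursor position:", cursor)
```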

What’s next

The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones, and tablets, perhaps extending out to the internet.

Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available in five to 10 years from now (“closer to five”).

Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?

* Millions of people with paralysis reside in the U.S.

** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.


Stanford | Stanford researchers develop brain-controlled typing for people with paralysis


Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.

Manipulating silicon atoms to create future ultra-fast, ultra-low-power chip technology

Model showing interactions between atomic-force microscope tip (top) and silicon surface (hydrogen: white; silicon: tan and red), using a new technique for coating the tip with hydrogen — part of a study to create future electronic circuits at the atomic level. (credit: Wolkow Lab)

Imagine a hybrid silicon-molecular computer that uses one thousand times less energy or a cell phone battery that lasts weeks at a time.

University of Alberta scientists, headed by University of Alberta physics professor Robert Wolkow, have taken a major step in that direction by visualizing and geometrically patterning silicon at the atomic level — using an innovative  atomic-force microscopy* (AFM) technique. The goal: chip technology that performs dramatically better than today’s CMOS architecture.

(Left) Ball-and-stick theoretical model of the pentacene molecule. (Right) AFM image of pentacene molecule showing the pattern of the bonds in the model. The five hexagonal carbon rings are resolved clearly and even the carbon-hydrogen bonds (white in the model) are imaged. Scale bar: 5 angstroms (0.5 nanometer) (credit: IBM Zurich)

Visualizing bonds in atoms at atomic resolution was first achieved by IBM Zurich scientists in 2009, when they imaged the pentacene molecule on copper. But imaging silicon is a problem: the sharp tip damages the fragile silicon molecules, the researchers note in an open-access paper published in the February 13, 2017 issue of Nature Communications.

To avoid damaging the silicon surface, the researchers created the first hydrogen-covered AFM tip, making it possible to manipulate silicon atoms. It was “a bit like Goldilocks,” PhD student and co-author Taleana Huff explained to KurzweilAI. “There is a sweet-spot region where you are probing the surface without interacting with it. Getting close enough to the surface with just the right parameters allows you to see these bonds materialize.

Bob Wolkow and Taleana Huff patterning and imaging electronic circuits at the atomic level (credit: Wolkow Lab)

“If you get too close though, you end up transferring atoms to the surface or, conversely, to the tip, ruining the experiment. A lot of tech and knowledge goes into getting all these settings just right, including a powerful new computational approach that analyzes and verifies the identity of the atoms and bonds.”

Hydrogen-terminated silicon for ultra-fast, ultra-low-power technology

“We see hydrogen-terminated silicon as the platform for a whole new paradigm of efficient and fast silicon-based electronics,” Huff said. “Now that we understand the surface intimately and have these powerful tools and the experience, the next step is to start using the AFM to look at computational elements made using quantum dots [nanoscale semiconductor particles], which we create by removing hydrogen atoms from the silicon surface. When we cleverly pattern them geometrically, these atomic silicon quantum dots can be used to make very fast and incredibly low-power computational patterns.”

The long-term goal is making ultra-fast and ultra-low-power silicon-based circuits that potentially consume one thousand times less power than what is currently on the market, according to the researchers, along with novel quantum applications.

* Typical atomic force microscope (AFM) setup

To image a surface, a sharp AFM tip scans across the sample to detect irregularities in the surface, which cause deflection of the tip and the connected cantilever, generating a topographical map of the sample surface. The deflection is measured by reflecting a laser beam off the backside of the cantilever. (credit: CC/Opensource Handbook of Nanoscience and Nanotechnology)


Wolkow Lab | An animation illustrating patterning and imaging electronic circuits at the atomic level. It shows the tip and surface atoms’ relaxation during calculations of a part of the image simulation at small tip-surface distance. The bending and rotation of bonds is visible, giving a sense of the interactions and atomic relaxations involved.


UAlbertaScience | Less is more for atomic-scale manufacturing

This animation represents an electrical current being switched on and off. Remarkably, the current is confined to a channel that is just one atom wide. Also, the switch is made of just one atom. When the atom in the center feels an electric field tugging at it, it loses its electron. Once that electron is lost, the many electrons in the body of the silicon (to the left) have a clear passage to flow through. When the electric field is removed, an electron gets trapped in the central atom, switching the current off. This represents the latest work out of Robert Wolkow’s lab at the University of Alberta.


Abstract of Indications of chemical bond contrast in AFM images of a hydrogen-terminated silicon surface

The origin of bond-resolved atomic force microscope images remains controversial. Moreover, most work to date has involved planar, conjugated hydrocarbon molecules on a metal substrate thereby limiting knowledge of the generality of findings made about the imaging mechanism. Here we report the study of a very different sample; a hydrogen-terminated silicon surface. A procedure to obtain a passivated hydrogen-functionalized tip is defined and evolution of atomic force microscopy images at different tip elevations are shown. At relatively large tip-sample distances, the topmost atoms appear as distinct protrusions. However, on decreasing the tip-sample distance, features consistent with the silicon covalent bonds of the surface emerge. Using a density functional tight-binding-based method to simulate atomic force microscopy images, we reproduce the experimental results. The role of the tip flexibility and the nature of bonds and false bond-like features are discussed.

Carnegie Mellon AI beats top poker pros — a first

“Brains vs Artificial Intelligence” competition at the Rivers Casino in Pittsburgh (credit: Carnegie Mellon University)

Libratus, an AI developed by Carnegie Mellon University, has defeated four of the world’s best professional poker players in a marathon 120,000 hands of Heads-up, No-Limit Texas Hold’em poker played over 20 days, CMU announced today (Jan. 31) — joining Deep Blue (for chess), Watson, and Alpha Go as major milestones in AI.

Libratus led the pros by a collective $1,766,250 in chips.* The tournament was held at the Rivers Casino in Pittsburgh from 11–30 January in a competition called “Brains Vs. Artificial Intelligence: Upping the Ante.”

The developers of Libratus — Tuomas Sandholm, professor of computer science, and Noam Brown, a Ph.D. student in computer science — said the sizable victory is statistically significant and not simply a matter of luck. “The best AI’s ability to do strategic reasoning with imperfect information has now surpassed that of the best humans,” Sandholm said. “This is the last frontier, at least in the foreseeable horizon, in game-solving in AI.”

This new AI milestone has implications for any realm in which information is incomplete and opponents sow misinformation, said Frank Pfenning, head of the Computer Science Department in CMU’s School of Computer Science. Business negotiation, military strategy, cybersecurity, and medical treatment planning could all benefit from automated decision-making using a Libratus-like AI.

“The computer can’t win at poker if it can’t bluff,” Pfenning explained. “Developing an AI that can do that successfully is a tremendous step forward scientifically and has numerous applications. Imagine that your smartphone will someday be able to negotiate the best price on a new car for you. That’s just the beginning.”

How the pros taught Libratus about its weaknesses

Brains vs AI scorecard (credit: Carnegie Mellon University)

So how was Libratus able to improve day to day during the competition? It turns out it was the pros themselves who taught Libratus about its weaknesses. “After play ended each day, a meta-algorithm analyzed what holes the pros had identified and exploited in Libratus’ strategy,” Sandholm explained. “It then prioritized the holes and algorithmically patched the top three using the supercomputer each night.

“This is very different than how learning has been used in the past in poker. Typically researchers develop algorithms that try to exploit the opponent’s weaknesses. In contrast, here the daily improvement is about algorithmically fixing holes in our own strategy.”

Sandholm also said that Libratus’ end-game strategy was a major advance. “The end-game solver has a perfect analysis of the cards,” he said. It was able to update its strategy for each hand in a way that ensured any late changes would only improve the strategy. Over the course of the competition, the pros responded by making more aggressive moves early in the hand, no doubt to avoid playing in the deep waters of the endgame where the AI had an advantage, he added.

Converging high-performance computing and AI

Professor Tuomas Sandholm, Carnegie Mellon School of Computer Science, with the Pittsburgh Supercomputing Center’s Bridges supercomputer (credit: Carnegie Mellon University)

Libratus’ victory was made possible by the Pittsburgh Supercomputing Center’s Bridges computer. Libratus recruited the raw power of approximately 600 of Bridges’ 846 compute nodes. Bridges’ total speed is 1.35 petaflops, about 7,250 times as fast as a high-end laptop, and its memory is 274 terabytes, about 17,500 times as much as you’d get in that laptop. This computing power gave Libratus the ability to play four of the best Texas Hold’em players in the world at once and beat them.
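Those laptop comparisons are straightforward to sanity-check with a couple of lines of arithmetic (the implied laptop specs, roughly 186 gigaflops and 16 GB of memory, are back-calculated here, not stated in the article):

```python
# Sanity-checking the comparisons above.
bridges_flops = 1.35e15              # 1.35 petaflops
bridges_bytes = 274e12               # 274 terabytes

print(bridges_flops / 7_250)         # ~1.9e11 flops -> implies a ~186-gigaflop laptop
print(bridges_bytes / 17_500 / 1e9)  # ~15.7 -> implies a laptop with ~16 GB of memory
```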

“We designed Bridges to converge high-performance computing and artificial intelligence,” said Nick Nystrom, PSC’s senior director of research and principal investigator for the National Science Foundation-funded Bridges system. “Libratus’ win is an important milestone toward developing AIs to address complex, real-world problems. At the same time, Bridges is powering new discoveries in the physical sciences, biology, social science, business and even the humanities.”

Sandholm said he will continue his research push on the core technologies involved in solving imperfect-information games and in applying these technologies to real-world problems. That includes his work with Optimized Markets, a company he founded to automate negotiations.

“CMU played a pivotal role in developing both computer chess, which eventually beat the human world champion, and Watson, the AI that beat top human Jeopardy! competitors,” Pfenning said. “It has been very exciting to watch the progress of poker-playing programs that have finally surpassed the best human players. Each one of these accomplishments represents a major milestone in our understanding of intelligence.”

Heads-Up No-Limit Texas Hold’em is a complex game, with 10^160 (the number 1 followed by 160 zeroes) information sets — each set being characterized by the path of play in the hand as perceived by the player whose turn it is. The AI must make decisions without knowing all of the cards in play, while trying to sniff out bluffing by its opponent. As “no-limit” suggests, players may bet or raise any amount up to all of their chips.
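To make “information set” concrete: at each decision point the acting player sees only the betting history, the community cards, and their own hole cards, never the opponent’s. Here is a minimal sketch of such a key; the class and field names are illustrative, not drawn from Libratus.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class InformationSet:
    """What the acting player actually knows at one decision point.

    Game states that differ only in the opponent's hidden cards collapse into
    the same information set; that is why heads-up no-limit hold'em has on the
    order of 10^160 of them, and why the AI must reason under uncertainty.
    """
    betting_history: Tuple[str, ...]   # e.g. ("raise 300", "call", "bet 450")
    board: Tuple[str, ...]             # community cards dealt so far
    own_hole_cards: Tuple[str, str]    # the only private cards this player sees

# Example decision point; the opponent's two hidden cards are deliberately absent.
infoset = InformationSet(
    betting_history=("raise 300", "call"),
    board=("Ah", "7d", "2c"),
    own_hole_cards=("Ks", "Kd"),
)
print(infoset)
```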

Sandholm will be sharing Libratus’ secrets now that the competition is over, beginning with invited talks at the Association for the Advancement of Artificial Intelligence meeting Feb. 4–9 in San Francisco and in submissions to peer-reviewed scientific conferences and journals.

* The pros — Dong Kim, Jimmy Chou, Daniel McAulay and Jason Les — will split a $200,000 prize purse based on their respective performances during the event. McAulay, of Scotland, said Libratus was a tougher opponent than he expected, but it was exciting to play against it. “Whenever you play a top player at poker, you learn from it,” he said.


Carnegie Mellon University | Brains Vs. AI Rematch: Why Poker?

This simple optoelectronic computer could one day outperform supercomputers for complex problems

Stanford post-doctoral scholar Peter McMahon, left, and visiting researcher Alireza Marandi examine a prototype of a new type of light-based computer. (credit: L.A. Cicero)

Stanford researchers have designed a new type of computer that combines optical and electronic technology to solve combinatorial optimization problems, which are challenging for traditional computers, even for supercomputers.

An optimal traveling salesman route through some U.S. capital cities (credit: SAS)

An example is the “traveling salesman” problem, in which a salesman must visit each city in a specific set exactly once and return to the starting city by the most efficient route possible. The number of possible routes grows factorially as cities are added, which is what makes the problem so hard to solve.

Other examples of such problems include finding the optimal path for delivery trucks, minimizing interference in wireless networks, and determining how proteins fold. Even small improvements in some of these areas could result in massive monetary savings.
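A short brute-force sketch makes the explosion concrete: with a fixed start city and symmetric distances, n cities admit (n-1)!/2 distinct tours, so exhaustive search stops being feasible after a few dozen cities. The distance matrix below is made up for illustration.

```python
import math
from itertools import permutations

def route_count(n_cities: int) -> int:
    """Distinct closed tours with a fixed start city (direction ignored)."""
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20, 25):
    print(f"{n:>2} cities -> {route_count(n):,} possible routes")

def tour_length(order, dist):
    """Length of the closed tour 0 -> order ... -> 0."""
    path = (0,) + order + (0,)
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def brute_force_tsp(dist):
    """Exact solution by enumeration, only viable for a handful of cities."""
    cities = range(1, len(dist))                      # city 0 is the fixed start
    best = min(permutations(cities), key=lambda o: tour_length(o, dist))
    return (0,) + best + (0,), tour_length(best, dist)

# Tiny made-up symmetric distance matrix (4 cities).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(brute_force_tsp(dist))   # -> ((0, 1, 3, 2, 0), 23)
```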

Quantum computers are also being explored to solve such problems, but “providing dense connectivity between qubits remains a major challenge,” the authors note in a paper published Oct. 20 in the journal Science.

The Stanford team has built an “Ising machine,” named for a mathematical model of magnetism. But instead of using magnets, their machine blends optical and electrical processing.*

Experimental schematic of a measurement-feedback-based coherent Ising machine. A time-division-multiplexed pulsed degenerate optical parametric oscillator is formed by a nonlinear crystal (PPLN) in a fiber ring cavity containing 160 pulses. A fraction of each pulse is measured and used to compute a feedback signal that effectively couples the otherwise-independent pulses in the cavity. IM: intensity modulator; PM: phase modulator; LO: local oscillator; SHG: second-harmonic generation; FPGA: field-programmable gate array. (credit: Peter L. McMahon et al./Science)

The team used a special kind of laser system known as a “degenerate optical parametric oscillator.” When the machine is turned on, each laser pulse settles into a state representing an upward- or downward-pointing “spin” in the classic Ising machine, and in the traveling-salesman mapping each pulse encodes a city’s position in a candidate route.

Scaling up

If it can be scaled up, this non-traditional computer could save costs by finding more optimal solutions to problems that have an incredibly high number of possible solutions, the researchers suggest.

Nearly all of the materials used to make this machine are off-the-shelf elements that are already used for telecommunications. That, in combination with the simplicity of the programming, makes it easy to scale up, the researchers say. Stanford’s machine is currently able to solve 100-variable problems with any arbitrary set of connections between variables, and it has been tested on thousands of scenarios.

A group at NTT in Japan that consulted with Stanford’s team has also created an independent version of the machine; its study has been published alongside Stanford’s by Science.

For now, the Ising machine-based system still falls short of beating the processing power of traditional digital computers when it comes to combinatorial optimization.

Researchers from the National Institute of Informatics (Japan), University of Tokyo, NTT Basic Research Laboratories, and the ImPACT Program were also co-authors. This research was funded by the Impulsing Paradigm Change through Disruptive Technologies (ImPACT) Program of the Council of Science, Technology and Innovation (Cabinet Office, Government of Japan).

* A theoretical Ising machine acts like a reprogrammable network of artificial magnets, where each magnet points only up or down and, like a real magnetic system, the network is expected to settle toward a low-energy state.

The theory is that, if the connections among a network of magnets can be programmed to represent the problem at hand, once they settle on the optimal, low-energy directions they should face, the solution can be derived from their final state. In the case of the traveling salesman, each artificial magnet in the Ising machine represents the position of a city in a particular path.
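In software terms, that picture amounts to minimizing the Ising energy E = -Σ J_ij s_i s_j over spins s_i = ±1. Below is a minimal stand-in using ordinary simulated annealing; it illustrates the objective the optical machine searches over, not how the machine itself works, and the coupling matrix is made up.

```python
import math
import random

def ising_energy(J, spins):
    """E = -sum over i<j of J[i][j] * s_i * s_j, with each spin in {+1, -1}."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=20_000, temp=2.0, cooling=0.9995):
    """Plain simulated annealing: a software stand-in for 'settling' to low energy."""
    n = len(J)
    spins = [random.choice((-1, 1)) for _ in range(n)]
    energy = ising_energy(J, spins)
    for _ in range(steps):
        i = random.randrange(n)
        spins[i] *= -1                            # trial flip
        new_energy = ising_energy(J, spins)
        accept = (new_energy <= energy or
                  random.random() < math.exp(-(new_energy - energy) / temp))
        if accept:
            energy = new_energy
        else:
            spins[i] *= -1                        # undo the flip
        temp *= cooling
    return spins, energy

# Tiny made-up coupling matrix: positive J rewards aligned spins, negative J opposed.
J = [[0,  1, -1],
     [1,  0,  1],
     [-1, 1,  0]]
print(anneal(J))
```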

In an earlier version of this machine (published two years ago), the team members extracted a small portion of each pulse, delayed it and added a controlled amount of that portion to the subsequent pulses. In traveling salesman terms, this is how they program the machine with the connections and distances between the cities. The pulse-to-pulse couplings constitute the programming of the problem. Then the machine is turned on to try to find a solution, which can be obtained by measuring the final output phases of the pulses.

The problem in this previous approach was connecting large numbers of pulses in arbitrarily complex ways. It was doable but required an added controllable optical delay for each pulse, which was costly and difficult to implement.

The new Stanford Ising machine shows that a more affordable and practical version could be made by replacing the controllable optical delays with a digital electronic circuit, which emulates the optical connections among the pulses to program the problem.
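As a rough software analogue of that measurement-feedback scheme (not the actual FPGA implementation), the sketch below measures every pulse amplitude on each round trip, computes a coupling-weighted feedback term from the other pulses, and injects it back, emulating digitally what the earlier design did with optical delay lines. The gain, feedback strength, and coupling matrix are arbitrary made-up values.

```python
import random

def measurement_feedback_ising(J, round_trips=500, gain=0.02, feedback=0.05):
    """Toy classical analogue of a measurement-feedback coherent Ising machine.

    Each 'pulse' is just a real-valued amplitude; the digital step measures the
    other pulses and adds a coupling-weighted feedback signal, playing the role
    that the FPGA-computed feedback plays in the Stanford machine.
    """
    n = len(J)
    amps = [random.uniform(-0.01, 0.01) for _ in range(n)]    # near-zero start
    for _ in range(round_trips):
        measured = list(amps)                                  # 'measure' each pulse
        for i in range(n):
            coupling = sum(J[i][j] * measured[j] for j in range(n) if j != i)
            amps[i] += gain * amps[i] + feedback * coupling    # growth plus feedback
            amps[i] = max(-1.0, min(1.0, amps[i]))             # crude saturation
    return [1 if a >= 0 else -1 for a in amps]                 # read spins off the signs

# Tiny made-up coupling matrix, as in the annealing sketch above.
J = [[0,  1, -1],
     [1,  0,  1],
     [-1, 1,  0]]
print(measurement_feedback_ising(J))
```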


Abstract of A fully-programmable 100-spin coherent Ising machine with all-to-all connections

Unconventional, special-purpose machines may aid in accelerating the solution of some of the hardest problems in computing, such as large-scale combinatorial optimizations, by exploiting different operating mechanisms than standard digital computers. We present a scalable optical processor with electronic feedback that can be realized at large scale with room-temperature technology. Our prototype machine is able to find exact solutions of, or to sample good approximate solutions to, a variety of hard instances of Ising problems with up to 100 spins and 10,000 spin-spin connections.