Join the around-the-world 24-hour conversation on the future to celebrate World Future Day March 1

Futurists from the 55 Millennium Project nodes worldwide will join other organizations and the public on March 1 to exchange ideas about the future

Futurists worldwide plan to celebrate March 1 as World Future Day with a 24-hour conversation about the world’s potential futures, challenges, and opportunities.

At 12 noon your local time on March 1, you can click on a Google hangout at goo.gl/4hCJq3 and join the conversation* (log in with a Google account). The conversation starts at 12 noon in Auckland, New Zealand, and moves west with the sun across the world’s time zones, ending at 12 noon Honolulu time.

The World Futures Studies Federation, Association of Professional Futurists, and Humanity+ have joined forces with The Millennium Project** to invite their members and the public to participate.

“This is an open discussion about the future,” says Jerome Glenn, CEO of The Millennium Project. “People will be encouraged to share their ideas about how to build a better future.”

This is the fourth year The Millennium Project has organized the event. Previous World Future Days have discussed issues such as:

  • Has the world become too complex to understand and manage?
  • Can collective intelligence and smart cities anticipate and manage such complexity?
  • Will there be a phase shift of global attitudes in the near future about what is important about the future?
  • Can new concepts of employment be created to prevent increasing unemployment caused by the acceleration of technological changes?
  • Can self-organization on the Internet reduce dependence on ill-informed politicians?
  • Can virtual currencies work without supporting organized crime?
  • How can we break free from mental constraints preventing truly innovative valuable ideas and understand how our brains might sabotage us (rational vs. irrational fear, traumatic memories, and defense mechanisms)?
  • How can we connect our brains to become more intelligent?

* If you join the video conference and see that the limit of interactive video participation has been reached, you will still be able to see and hear, as well as type in the chat box, but your video will not be seen until some leave the conversation. As people drop out, new video slots will open up. You can also tweet a comment to @millenniumproj and facilitators will read it live in the video conference.

** The Millennium Project is an independent non-profit global participatory futures research think tank of futurists, scholars, business planners, and policy makers who work for international organizations, governments, corporations, non-governmental organizations, and universities. It produces the annual “State of the Future” reports, the “Futures Research Methodology” series, the Global Futures Intelligence System (GFIS), and special studies. 

Brain-imaging headband measures how our minds mirror a speaker when we communicate

A cartoon image of brain “coupling” during communication (credit: Drexel University)

Drexel University biomedical engineers and Princeton University psychologists have used a wearable brain-imaging device based on functional near-infrared spectroscopy (fNIRS) to measure brain synchronization when humans interact. fNIRS uses light to measure neural activity in the cortex of the brain (based on blood-oxygenation changes) during real-life situations and can be worn like a headband.

(KurzweilAI recently covered research with an fNIRS brain-computer interface that allows completely locked-in patients to communicate.)

An fNIRS headband (credit: Wyss Center for Bio and Neuroengineering)

Mirroring the speaker’s brain activity

The researchers found that a listener’s brain activity (in brain areas associated with speech comprehension) mirrors the speaker’s brain when he or she is telling a story about a real-life experience, with about a five-second delay. They also found that higher coupling is associated with better understanding.
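The reported five-second lag can be thought of as the time shift at which the speaker–listener correlation peaks. Below is a minimal sketch of such a lagged-coupling computation; the signals, sampling rate, and lag range here are synthetic illustrations, not the study's data or analysis pipeline.

```python
import numpy as np

def lagged_coupling(speaker, listener, fs=1.0, max_lag_s=10.0):
    """Correlate a listener's signal with the speaker's at increasing
    time lags (listener trailing the speaker) and return the lag in
    seconds with the strongest coupling, plus that correlation."""
    max_lag = int(max_lag_s * fs)
    corrs = []
    for lag in range(max_lag + 1):
        a = speaker[: len(speaker) - lag] if lag else speaker
        b = listener[lag:]
        corrs.append(float(np.corrcoef(a, b)[0, 1]))
    best = int(np.argmax(corrs))
    return best / fs, corrs[best]

# Toy demo: a "listener" trace that echoes the "speaker" trace 5 s later.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(300)                        # 1 sample/second
listener = np.roll(speaker, 5) + 0.1 * rng.standard_normal(300)
delay, r = lagged_coupling(speaker, listener)             # delay ≈ 5 s
```

The same idea scales up to the study's inter-subject correlation analysis, where the correlation is computed per brain region across many listeners.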

The researchers believe the system can offer important information about how to communicate better in many different environments, such as how people learn in classrooms and how to improve business meetings and doctor-patient communication. They also mentioned uses in analyzing political rallies and how audiences respond to cable news.

“We now have a tool that can give us richer information about the brain during everyday tasks — such as person-to-person communication — that we could not receive in artificial lab settings or from single brain studies,” said Hasan Ayaz, PhD, an associate research professor in Drexel’s School of Biomedical Engineering, Science and Health Systems, who led the research team.

Traditional brain imaging methods like fMRI have limitations. In particular, fMRI requires subjects to lie motionless in a noisy scanning environment, which makes it impossible to simultaneously scan the brains of multiple individuals who are speaking face-to-face. That is why the Drexel researchers turned to a portable fNIRS system, which can probe the brain-to-brain coupling question in natural settings.

For their study, one native English speaker and two native Turkish speakers told an unrehearsed, real-life story in their native language. Their stories were recorded and their brains were scanned using fNIRS. Fifteen English speakers then listened to the recordings, in addition to a story that was recorded at a live storytelling event.

The researchers targeted the prefrontal and parietal areas of the brain, which include cognitive and higher order areas that are involved in a person’s capacity to discern beliefs, desires, and goals of others. They hypothesized that a listener’s brain activity would correlate with the speaker’s only when listening to a story they understood (the English version). A second objective of the study was to compare the fNIRS results with data from a similar study that had used fMRI to compare the two methods.

They found that when the fNIRS measured changes in oxygenated and deoxygenated hemoglobin in the test subjects’ brains, the listeners’ brain activity matched only the English speaker’s.* These results were also consistent with the previous fMRI study.

The researchers believe the new research supports fNIRS as a viable future tool to study brain-to-brain coupling during social interaction. One can also imagine privacy-invasive uses in areas such as law enforcement and military interrogation.

The research was published in open-access Scientific Reports on Monday, Feb. 27.

* “During brain-to-brain coupling, activity in areas of prefrontal [in the speaker] and parietal cortex [in the listeners] previously reported to be involved in sentence comprehension were robustly correlated across subjects, as revealed in the inter-subject correlation analysis. As these are task-related (active listening) activation periods (not resting, etc.), the correlations reflect modulation of these regions by the time-varying content of the narratives, and comprise linguistic, conceptual and affective processing.” — Yichuan Liu et al./Scientific Reports


Abstract of Measuring speaker–listener neural coupling with functional near infrared spectroscopy

The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners’ brain activity was significantly correlated with speakers’ with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.

Billionaire Softbank CEO Masayoshi Son plans to invest in the singularity

Masayoshi Son (credit: Softbank Group)

Billionaire Softbank Group Chairman and CEO Masayoshi Son revealed Monday (Feb. 27) at Mobile World Congress his plan to invest in the singularity. “In next 30 years [the singularity] will become a reality,” he said, TechCrunch reports.

“If superintelligence goes inside the moving device then the world, our lifestyle dramatically changes,” he said. “There will be many kinds. Flying, swimming, big, micro, run, 2 legs, 4 legs, 100 legs,” referring to robots. “I truly believe it’s coming, that’s why I’m in a hurry — to aggregate the cash, to invest.”

“Son said his personal conviction in the looming rise of billions of superintelligent robots both explains his acquisition of UK chipmaker ARM last year, and his subsequent plan to establish the world’s biggest VC fund,” noted TechCrunch — a new $100BN fund called the Softbank Vision Fund, announced last October.

TechCrunch said that despite additional contributors including Foxconn, Apple, Qualcomm and Oracle co-founder Larry Ellison’s family office, the fund has evidently not yet hit Son’s target of $100BN, so he used the keynote as a sales pitch for additional partners.

Addressing existential threats

“Son said his haste is partly down to a belief that superintelligent AIs can be used for ‘the goodness of humanity,’ going on to suggest that only AI has the potential to address some of the greatest threats to humankind’s continued existence — be it climate change or nuclear annihilation,” said TechCrunch.

“It will be so much more capable than us — what will be our job? What will be our life? We have to ask philosophical questions,” Son said. “Is it good or bad? I think this superintelligence is going to be our partner. If we misuse it it’s a risk. If we use it in good spirits it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on.”

“With the coming of singularity, I believe we will benefit from new ideas and wisdom that people were previously incapable of thanks to big data and other analytics,” Son said on the Softbank Group website. “At some point I am sure we will see the birth of a ‘Super-intelligence’ that will contribute to humanity. This paradigm shift has only accelerated in recent years as both a worldwide and irreversible trend.”

Neural networks promise sharpest-ever telescope images

From left to right: an example of an original galaxy image; the same image deliberately degraded; the image after recovery by the neural network; and, for comparison, the image recovered by conventional deconvolution. This figure visually illustrates the neural network’s ability to recover features that conventional deconvolution cannot. (credit: K. Schawinski / C. Zhang / ETH Zurich)

Swiss researchers are using neural networks to achieve the sharpest-ever images in optical astronomy. The work appears in an open-access paper in Monthly Notices of the Royal Astronomical Society.

The resolution of any telescope is fundamentally limited by the aperture (diameter) of its lens or mirror. The bigger the mirror or lens, the more light it gathers, allowing astronomers to detect fainter objects and to observe them more clearly. Other factors affecting image quality are noise and atmospheric distortion.

The Swiss study uses “generative adversarial network” (GAN) machine-learning technology (see this KurzweilAI article) to go beyond this limit by using two neural networks that compete with each other to create a series of more realistic images. The researchers first train the neural network to “see” what galaxies look like (using blurred and sharp images of the same galaxy), and then ask it to automatically fix the blurred images of a galaxy, converting them to sharp ones.

Schematic illustration of the neural-network training process. The input is a set of original images. From these, the researchers automatically generate degraded images, and train a GAN. In the testing phase, only the generator will be used to recover images. (credit: K. Schawinski / C. Zhang / ETH Zurich)
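The degraded half of each training pair can be generated automatically from the original image. Here is a minimal sketch of one plausible degradation step — a Gaussian point-spread function standing in for atmospheric “seeing,” plus additive noise. The kernel shape and noise model are illustrative assumptions, not the paper’s exact recipe.

```python
import numpy as np

def degrade(image, seeing_sigma=2.0, noise_sigma=0.05, rng=None):
    """Blur an image with a Gaussian PSF (a stand-in for worse 'seeing')
    and add Gaussian noise, producing the degraded half of a training pair."""
    rng = np.random.default_rng(rng)
    size = int(6 * seeing_sigma) | 1              # odd kernel width
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * seeing_sigma ** 2))
    psf /= psf.sum()                              # conserve total flux
    # FFT convolution (periodic boundaries; the output is shifted by half
    # the kernel width -- acceptable for a sketch)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                   np.fft.fft2(psf, s=image.shape)))
    return blurred + rng.normal(0.0, noise_sigma, image.shape)
```

Training pairs are then (degraded, original); the GAN’s generator learns the inverse mapping while the discriminator judges whether a recovered image looks like a real sharp galaxy.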

The trained neural networks were able to recognize and reconstruct features that the telescope could not resolve, such as star-forming regions and dust lanes in galaxies. The scientists checked the reconstructed images against the original high-resolution images to test the network’s performance, finding it better able to recover features than any method used to date.

“We can start by going back to sky surveys made with telescopes over many years, see more detail than ever before, and, for example, learn more about the structure of galaxies,” said lead author Prof. Kevin Schawinski of ETH Zurich in Switzerland. “There is no reason why we can’t then apply this technique to the deepest images from Hubble, and the coming James Webb Space Telescope, to learn more about the earliest structures in the Universe.”

ETH Zurich is hosting this work on the space.ml cross-disciplinary astrophysics/computer-science initiative, where the code is available to the general public.


Abstract of Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Sky Telescope (LSST) and the Hubble and James Webb space telescopes.

Why you should eat 10 portions of fruit or vegetables a day

image credit | iStock

Eating 800 grams a day (about ten portions*) of fruit or vegetables could reduce your chance of heart attack, stroke, cancer, and early death, scientists from Imperial College London conclude from a meta-analysis of 95 studies on fruit and vegetable intake.

The study, published in an open-access paper in the International Journal of Epidemiology, included 2 million people worldwide and assessed up to 43,000 cases of heart disease, 47,000 cases of stroke, 81,000 cases of cardiovascular disease, 112,000 cancer cases and 94,000 deaths.

About 7.8 million premature deaths worldwide could be potentially prevented yearly if people followed this protocol, the researchers say.

Compared to not eating any fruits and vegetables, a daily intake of 200 grams (two and a half portions) was associated with a 16% reduced risk of heart disease, an 18% reduced risk of stroke, a 13% reduced risk of cardiovascular disease, a 4% reduction in cancer risk, and a 15% reduction in the risk of premature death.

However, a higher intake of 800 grams a day was associated with a 24% reduced risk of heart disease, a 33% reduced risk of stroke, a 28% reduced risk of cardiovascular disease, a 13% reduced risk of total cancer,** and a 31% reduction in the risk of dying prematurely.***
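Note that the 800-gram figures are not simply four compoundings of the 200-gram figures. A naive log-linear extrapolation from the study’s summary relative risks per 200 g/day generally overshoots the reported numbers, because the dose-response curve flattens at higher intakes (for cancer, no further reduction was seen above 600 grams). A quick sketch of that comparison:

```python
# Summary relative risks (RR) per 200 g/day reported by the meta-analysis.
rr_per_200g = {
    "coronary heart disease": 0.92,
    "stroke": 0.84,
    "cardiovascular disease": 0.92,
    "total cancer": 0.97,
    "all-cause mortality": 0.90,
}

def log_linear_rr(rr_200, grams):
    """Naive log-linear extrapolation: compound the per-200 g RR."""
    return rr_200 ** (grams / 200.0)

for outcome, rr in rr_per_200g.items():
    cut = 100.0 * (1.0 - log_linear_rr(rr, 800))
    print(f"{outcome}: {cut:.0f}% reduction at 800 g/day if log-linear")
```

For stroke, for instance, this gives roughly a 50% reduction, well above the reported 33% — consistent with diminishing returns at high intakes.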

The current UK guidelines suggest you eat at least five portions or 400 grams per day, but fewer than one in three UK adults are thought to even meet this target. The U.S. Health and Human Services/USDA guidelines use a different metric: “The recommended amount of vegetables in the Healthy U.S.-Style Eating Pattern at the 2,000-calorie level is 2½ cup-equivalents of vegetables per day and 2 cup-equivalents of fruit per day.”


Foods that are best at disease prevention, according to the study

To prevent heart disease, stroke, cardiovascular disease, and early death: apples, pears, citrus fruits, salads, and green leafy vegetables such as spinach, lettuce and chicory, and cruciferous vegetables such as broccoli, cabbage and cauliflower.

To reduce cancer risk: green vegetables, such as spinach or green beans, yellow vegetables, such as peppers and carrots, and cruciferous vegetables.


Reasons for health benefits

So why do fruit and vegetables have such profound health benefits? According to Dagfinn Aune, PhD, lead author of the research, from the School of Public Health at Imperial: “Fruit and vegetables have been shown to reduce cholesterol levels, blood pressure, and to boost the health of our blood vessels and immune system. This may be due to the complex network of nutrients they hold. For instance they contain many antioxidants, which may reduce DNA damage, and lead to a reduction in cancer risk.”

He also noted that compounds called glucosinolates in cruciferous vegetables, such as broccoli, activate enzymes that may help prevent cancer. And fruit and vegetables may also have a beneficial effect on the naturally-occurring bacteria in our gut.

image credit | iStock

Most beneficial compounds can’t be easily replicated in a pill, he said: “Most likely it is the whole package of beneficial nutrients you obtain by eating fruits and vegetables that is crucial to health.

“This is why it is important to eat whole plant foods to get the benefit, instead of taking antioxidant or vitamin supplements, which have not been shown to reduce disease risk.”

In the paper, the researchers qualify these statements, noting that they assume the observed associations are causal (there could be other causes of improved health). The team, however, took into account some other factors, such as a person’s weight, smoking, physical activity levels, and overall diet.

“We need further research into the effects of specific types of fruits and vegetables and preparation methods of fruit and vegetables,” Aune suggested. “We also need more research on the relationship between fruit and vegetable intake with causes of death other than cancer and cardiovascular disease. However, it is clear from this work that a high intake of fruit and vegetables holds tremendous health benefits, and we should try to increase their intake in our diet.”

This project was funded by Olav og Gerd Meidel Raagholt’s Stiftelse for Medisinsk Forskning, the Liaison Committee between the Central Norway Regional Health Authority (RHA) and the Norwegian University of Science and Technology (NTNU), and the Imperial College National Institute of Health Research (NIHR) Biomedical Research Centre (BRC).

* A portion (80 grams) of fruit equals approximately one small banana, apple, pear or large mandarin; three heaped tablespoons of cooked vegetables such as spinach, peas, broccoli or cauliflower count as one portion.

** For cancer, no further reductions in risk were observed above 600 grams per day.

*** The team was not able to investigate intakes greater than 800 g a day. The team also did not find significant differences between raw and cooked vegetables in relation to early death, and noted that other specific fruits and vegetables, as well as preparation methods, may also play a role.



Abstract of Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality–a systematic review and dose-response meta-analysis of prospective studies

Background: Questions remain about the strength and shape of the dose-response relationship between fruit and vegetable intake and risk of cardiovascular disease, cancer and mortality, and the effects of specific types of fruit and vegetables. We conducted a systematic review and meta-analysis to clarify these associations.

Methods: PubMed and Embase were searched up to 29 September 2016. Prospective studies of fruit and vegetable intake and cardiovascular disease, total cancer and all-cause mortality were included. Summary relative risks (RRs) were calculated using a random effects model, and the mortality burden globally was estimated; 95 studies (142 publications) were included.

Results: For fruits and vegetables combined, the summary RR per 200 g/day was 0.92 [95% confidence interval (CI): 0.90–0.94, I2 = 0%, n = 15] for coronary heart disease, 0.84 (95% CI: 0.76–0.92, I2 = 73%, n = 10) for stroke, 0.92 (95% CI: 0.90–0.95, I2 = 31%, n = 13) for cardiovascular disease, 0.97 (95% CI: 0.95–0.99, I2 = 49%, n = 12) for total cancer and 0.90 (95% CI: 0.87–0.93, I2 = 83%, n = 15) for all-cause mortality. Similar associations were observed for fruits and vegetables separately. Reductions in risk were observed up to 800 g/day for all outcomes except cancer (600 g/day). Inverse associations were observed between the intake of apples and pears, citrus fruits, green leafy vegetables, cruciferous vegetables, and salads and cardiovascular disease and all-cause mortality, and between the intake of green-yellow vegetables and cruciferous vegetables and total cancer risk. An estimated 5.6 and 7.8 million premature deaths worldwide in 2013 may be attributable to a fruit and vegetable intake below 500 and 800 g/day, respectively, if the observed associations are causal.

Conclusions: Fruit and vegetable intakes were associated with reduced risk of cardiovascular disease, cancer and all-cause mortality. These results support public health recommendations to increase fruit and vegetable intake for the prevention of cardiovascular disease, cancer, and premature mortality.

An ultra-low-power artificial synapse for neural-network computing

(Left) Illustration of a synapse in the brain connecting two neurons. (Right) Schematic of artificial synapse (ENODe), which functions as a transistor. It consists of two thin, flexible polymer films (black) with source, drain, and gate terminals, connected by an electrolyte of salty water that permits ions to cross. A voltage pulse applied to the “presynaptic” layer (top) alters the level of oxidation in the “postsynaptic layer” (bottom), triggering current flow between source and drain. (credit: Thomas Splettstoesser/CC and Yoeri van de Burgt et al./Nature Materials)

Stanford University and Sandia National Laboratories researchers have developed an organic artificial synapse based on a new memristor (resistive memory device) design that mimics the way synapses in the brain learn. The new artificial synapse could lead to computers that better recreate the way the human brain processes information. It could also one day directly interface with the human brain.

The new artificial synapse is an electrochemical neuromorphic organic device (dubbed “ENODe”) — a mixed ionic/electronic design that is fundamentally different from existing and other proposed resistive memory devices, which are limited by noise, high required write voltages, and other factors*, the researchers note in a paper published online Feb. 20 in Nature Materials.

Like a neural path in a brain being reinforced through learning, the artificial synapse is programmed by discharging and recharging it repeatedly. Through this training, the researchers have been able to predict, to within 1 percent of uncertainty, what voltage will be required to bring the synapse to a specific electrical state and, once there, keep it at that state.
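That 1-percent programming precision suggests a simple closed-loop picture: apply small voltage pulses until the device’s conductance lands within tolerance of a target. The sketch below uses the ~0.5 mV pulse scale quoted for the ENODe, but the device model itself is purely hypothetical (a linear pulse response, not the actual electrochemical physics).

```python
def program_state(device, target, pulse_v=0.0005, tol=0.01, max_pulses=10000):
    """Apply fixed-size voltage pulses (0.5 mV here) until the device
    conductance is within `tol` (1%) of the target.
    Returns the number of pulses used."""
    for n in range(max_pulses):
        err = (device.conductance - target) / target
        if abs(err) <= tol:
            return n
        device.apply_pulse(-pulse_v if err > 0 else pulse_v)
    raise RuntimeError("did not converge")

class ToyDevice:
    """Hypothetical device model: conductance responds linearly to pulses.
    (The real ENODe mechanism is electrochemical, not this simple.)"""
    def __init__(self, conductance=1.0, gain=50.0):
        self.conductance = conductance
        self.gain = gain

    def apply_pulse(self, volts):
        self.conductance += self.gain * volts

dev = ToyDevice()
pulses = program_state(dev, target=1.5)   # pulses needed to reach the state
```

In a real array, a characterization step would replace the fixed linear gain with the device’s measured pulse response.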

“The working mechanism of ENODes is reminiscent of that of natural synapses, where neurotransmitters diffuse through the cleft, inducing depolarization due to ion penetration in the postsynaptic neuron,” the researchers explain in the paper. “In contrast, other memristive devices switch by melting materials at relatively high temperatures (PCMs) or by voltage-induced breakdown/filament formation and ion diffusion in dense oxide layers (FFMOs).”

The ENODe achieves significant energy savings** in two ways:

  • Unlike a conventional computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts. Traditional computing requires separately processing information and then storing it into memory. Here, the processing creates the memory.
  • When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and co-senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

A future brain-like computer with 500 states

Only one artificial synapse has been produced so far, but researchers at Sandia used 15,000 measurements to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy of 93 to 97 percent.

This artificial synapse may one day be part of a brain-like computer, which could be especially useful for processing visual and auditory signals, as in voice-controlled interfaces and driverless cars, but without energy-consuming computer hardware.

This device is also well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another, the synapse used about one-tenth as much energy as a state-of-the-art computing system needs to move data from the processing unit to memory.
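The 500 distinct states can be pictured as a quantization grid for neural-network weights: each continuous weight is snapped to the nearest device conductance level. A minimal sketch (the weight range and uniform level spacing are illustrative assumptions, not measured device properties):

```python
import numpy as np

def quantize(weights, n_states=500, w_min=-1.0, w_max=1.0):
    """Snap continuous weights to the nearest of `n_states` evenly spaced
    conductance levels, as an array of 500-state devices would store them."""
    levels = np.linspace(w_min, w_max, n_states)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, size=(8, 8))        # a toy weight matrix
wq = quantize(w)
spacing = 2.0 / (500 - 1)                  # gap between adjacent levels
max_err = np.abs(w - wq).max()             # bounded by spacing / 2
```

With 500 levels the worst-case rounding error per weight is half a level spacing, which is why such arrays can approximate continuous-weight networks closely.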

However, this is still about 10,000 times as much energy as the minimum a biological synapse needs in order to fire**. The researchers hope to attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.

Linking to live organic neurons

Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms, but these depend on energy-consuming traditional computer hardware.

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The switching voltages applied to train the artificial synapse (about 0.5 mV) are also the same as those that move through human neurons — about 1,000 times lower than the “write” voltage for a typical memristor.

That means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

* “A resistive memory device has not yet been demonstrated with adequate electrical characteristics to fully realize the efficiency and performance gains of a neural architecture. State-of-the-art memristors suffer from excessive write noise, write non-linearities, and high write voltages and currents.  Reducing the noise and lowering the switching voltage significantly below 0.3 V (~10 kT) in a two-terminal device without compromising long-term data retention has proven difficult.” … Organic memristive devices have been recently proposed, but are limited by “the slow kinetics of ion diffusion through a polymer to retain their states or on charge storage in metal nanoparticles, which inherently limits performance and stability.” — Yoeri van de Burgt et al., Nature Materials

** ENODe switches at low voltage and energy (< 10 pJ for 1000-square-micrometer devices), compared to an estimated ∼ 1–100 fJ per synaptic event for the human brain.
 

Abstract of A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 103 μm2 devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users

Typing with your mind. You are paralyzed. But now, tiny electrodes have been surgically implanted in your brain to record signals from your motor cortex, the brain region controlling muscle movement. As you think of mousing over to a letter (or clicking to choose it), those electrical brain signals are transmitted via a cable to a computer (replacing your spinal cord and muscles). There, advanced algorithms decode the complex electrical brain signals, converting them instantly into screen actions. (credit: Chethan Pandarinath et al./eLife)

Stanford University researchers have developed a brain-computer interface (BCI) system that enables people with paralysis* to type (using an on-screen cursor) at speeds and accuracy levels about three times higher than previously reported.

Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute — all without auto-complete assistance, which could make typing much faster.
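The characters-to-words conversion above can be checked with the common typing-speed convention of 5 characters per "word" (an assumption; the paper may define words differently):

```python
# Rough check of the reported rates, using the common convention
# of 5 characters (including spaces) per "word".
CHARS_PER_WORD = 5

def wpm(chars_per_minute):
    """Convert a characters-per-minute rate to words per minute."""
    return chars_per_minute / CHARS_PER_WORD

print(wpm(39))  # 7.8 -> "about eight words per minute"
```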

Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.

BrainGate and beyond

The three study participants used a brain-computer interface called the “BrainGate Neural Interface System.” On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.

BrainGate in 2012 (credit: Brown University)

The new research, led by Stanford, takes the BrainGate technology way further**. Participants can now move a cursor (by just thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and they can “point and click” on letters, computer-mouse-style, to type letters and sentences.

The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

As the participant imagines a specific hand movement (pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are then filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.
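The study used two dedicated decoding algorithms; as a toy illustration of the general idea (not the study's actual decoder), the sketch below maps a single bin of spike counts from the array's channels to a 2D cursor velocity with a fixed linear read-out. The decoding matrix, baseline rates, and bin width are all made-up assumptions.

```python
# Minimal sketch (not the study's decoder): map one bin of spike counts
# from N channels to a 2D cursor velocity with a fixed linear decoder.
# Real iBCIs fit the decoding matrix from calibration data; here the
# matrix and the spike counts are invented for illustration.

N_CHANNELS = 96   # channels on the implanted microelectrode array
DT = 0.02         # 20 ms decoding bin, a typical choice

def decode_velocity(spike_counts, decoder, baseline):
    """v = D @ (counts - baseline): one velocity sample per bin."""
    vx = sum(d * (c - b) for d, c, b in zip(decoder[0], spike_counts, baseline))
    vy = sum(d * (c - b) for d, c, b in zip(decoder[1], spike_counts, baseline))
    return vx, vy

def update_cursor(pos, v, dt=DT):
    """Integrate the decoded velocity into a new cursor position."""
    return (pos[0] + v[0] * dt, pos[1] + v[1] * dt)

# Toy example: 96 channels, only channel 0 modulates above baseline.
baseline = [5.0] * N_CHANNELS
decoder = [[0.0] * N_CHANNELS for _ in range(2)]
decoder[0][0] = 2.0        # channel 0 drives +x velocity
counts = baseline[:]
counts[0] = 8.0            # 3 extra spikes this bin

v = decode_velocity(counts, decoder, baseline)
pos = update_cursor((0.0, 0.0), v)
print(v, pos)  # velocity points in +x, so the cursor moves right
```

A separate classifier (the second algorithm mentioned above) would then decide when the decoded state means "click" rather than "move".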

What’s next

The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones and tablets — perhaps extending out to the internet.

Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available within five to 10 years (“closer to five”).

Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?

* Millions of people with paralysis reside in the U.S.

** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western Reserve University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.


Stanford | Stanford researchers develop brain-controlled typing for people with paralysis


Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.

NASA announces Wed. news conference on ‘discovery beyond our solar system’

Artist’s concept of exoplanet Kepler-452b, the first near-Earth-size world to be found in the habitable zone of a star similar to our Sun. (credit: NASA Ames/JPL-Caltech/T. Pyle)

NASA will hold a news conference at 1 p.m. EST Wednesday, Feb. 22, to present new findings on exoplanets — planets that orbit stars other than our sun. As of Feb. 21, NASA has discovered and confirmed 3,440 exoplanets.

The briefing participants are Thomas Zurbuchen, associate administrator of the Science Mission Directorate at NASA Headquarters in Washington; Michael Gillon, astronomer at the University of Liege in Belgium; Sean Carey, manager of NASA’s Spitzer Science Center at Caltech/IPAC, Pasadena, California; Nikole Lewis, astronomer at the Space Telescope Science Institute in Baltimore; and Sara Seager, professor of planetary science and physics at Massachusetts Institute of Technology, Cambridge. Details of the findings are embargoed by the journal Nature until 1 p.m.

Interestingly, Seager, who studies biosignatures in exoplanet atmospheres, has suggested that two inhabited planets could reasonably turn up during the next decade, based on her modified version of the Drake equation, Space.com notes. Her equation focuses on the search for planets with biosignature gases — gases produced by life that can accumulate in a planet’s atmosphere to levels detectable with remote space telescopes.
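Seager's revision is usually stated as a product of a star count and several fractions, analogous to the original Drake equation. The sketch below shows the bookkeeping only; both the factor names and the numbers plugged in are illustrative assumptions, not her published estimates.

```python
# Drake-style product in the spirit of Seager's revised equation:
# expected number of planets with detectable biosignature gases.
# All names and values below are illustrative assumptions.

def seager(n_stars, f_quiet, f_hz, f_obs, f_life, f_sig):
    """n_stars surveyed, times the fraction of quiet stars, the fraction
    with a rocky planet in the habitable zone, the fraction observable,
    the fraction with life, and the fraction with detectable gases."""
    return n_stars * f_quiet * f_hz * f_obs * f_life * f_sig

# Made-up round numbers just to show how the factors multiply out:
n = seager(n_stars=30_000, f_quiet=0.2, f_hz=0.15,
           f_obs=0.001, f_life=1.0, f_sig=0.5)
print(n)  # a fraction of a detection for this (invented) survey
```

The structure makes the point of her argument clear: every factor is a multiplicative filter, so detections hinge on surveying enough stars and on how often life produces gases that accumulate to observable levels.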

“If we can identify another Earth-like planet, it comes full circle, from thinking that everything revolves around our planet to knowing that there are lots of other Earths out there,” she has stated.

The event will air live on NASA Television and will be live-streamed. The public may ask questions during the briefing on Twitter using the hashtag #askNASA. A Reddit AMA (Ask Me Anything) about exoplanets will be held following the briefing at 3 p.m., with scientists available to answer questions in English and Spanish.


NASA Jet Propulsion Laboratory

Manipulating silicon atoms to create future ultra-fast, ultra-low-power chip technology

Model showing interactions between atomic-force microscope tip (top) and silicon surface (hydrogen: white; silicon: tan and red), using a new technique for coating the tip with hydrogen — part of a study to create future electronic circuits at the atomic level. (credit: Wolkow Lab)

Imagine a hybrid silicon-molecular computer that uses one thousand times less energy or a cell phone battery that lasts weeks at a time.

University of Alberta scientists, headed by physics professor Robert Wolkow, have taken a major step in that direction by visualizing and geometrically patterning silicon at the atomic level — using an innovative atomic-force microscopy* (AFM) technique. The goal: chip technology that performs dramatically better than today’s CMOS architecture.

(Left) Ball-and-stick theoretical model of the pentacene molecule. (Right) AFM image of pentacene molecule showing the pattern of the bonds in the model. The five hexagonal carbon rings are resolved clearly and even the carbon-hydrogen bonds (white in the model) are imaged. Scale bar: 5 angstroms (0.5 nanometer) (credit: IBM Zurich)

Visualizing bonds between atoms at atomic resolution was first achieved by IBM Zurich scientists in 2009, when they imaged the pentacene molecule on copper. But imaging silicon is a problem: the sharp tip damages the fragile silicon surface, the researchers note in an open-access paper published in the February 13, 2017 issue of Nature Communications.

To avoid damaging the silicon surface, the researchers created the first hydrogen-covered AFM tip, making it possible to manipulate silicon atoms. It was “a bit like Goldilocks,” PhD student and co-author Taleana Huff explained to KurzweilAI. “There is a sweet-spot region where you are probing the surface without interacting with it. Getting close enough to the surface with just the right parameters allows you to see these bonds materialize.

Bob Wolkow and Taleana Huff patterning and imaging electronic circuits at the atomic level (credit: Wolkow Lab)

“If you get too close though, you end up transferring atoms to the surface or, conversely, to the tip, ruining the experiment. A lot of tech and knowledge goes into getting all these settings just right, including a powerful new computational approach that analyzes and verifies the identity of the atoms and bonds.”

Hydrogen-terminated silicon for ultra-fast, ultra-low-power technology

“We see hydrogen-terminated silicon as the platform for a whole new paradigm of efficient and fast silicon-based electronics,” Huff said. “Now that we understand the surface intimately and have these powerful tools and the experience, the next step is to start using the AFM to look at computational elements made using quantum dots [nanoscale semiconductor particles], which we create by removing hydrogen atoms from the silicon surface. When we cleverly pattern them geometrically, these atomic silicon quantum dots can be used to make very fast and incredibly low-power computational patterns.”

The long-term goal, according to the researchers, is ultra-fast, ultra-low-power silicon-based circuits that potentially consume one thousand times less power than those currently on the market, along with novel quantum applications.

* Typical atomic force microscope (AFM) setup

To image a surface, a sharp AFM tip scans across the sample to detect irregularities in the surface, which cause deflection of the tip and the connected cantilever, generating a topographic map of the sample surface. The deflection is measured by reflecting a laser beam off the backside of the cantilever. (credit: CC/Opensource Handbook of Nanoscience and Nanotechnology)
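The scan-and-record loop described above can be sketched in a few lines. This is a toy illustration only: the "surface" is a made-up height function standing in for the real cantilever/laser deflection readout, and the grid size and step are arbitrary assumptions.

```python
# Toy illustration of the AFM imaging loop: raster the tip over a grid,
# record the measured height at each point, and assemble a topographic
# map. A made-up function stands in for the real deflection readout.
import math

def surface_height(x, y):
    """Stand-in sample: a gentle bump pattern (arbitrary units)."""
    return math.sin(x) * math.cos(y)

def raster_scan(nx, ny, step=0.5):
    """Scan an nx-by-ny grid and return the height map row by row."""
    return [[surface_height(i * step, j * step) for i in range(nx)]
            for j in range(ny)]

topo = raster_scan(8, 8)
peak = max(max(row) for row in topo)
print(f"{len(topo)}x{len(topo[0])} map, max height {peak:.3f}")
```

In a real instrument, a feedback loop also adjusts the tip height to hold the deflection (or oscillation frequency shift) constant; the Wolkow group's hydrogen-coated tip is what lets this loop run close enough to the silicon surface to resolve bonds without damaging it.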


Wolkow Lab | An animation illustrating patterning and imaging electronic circuits at the atomic level. It shows the tip and surface atoms’ relaxation during calculations of a part of the image simulation at small tip-surface distance. The bending and rotation of bonds is visible, giving a sense of the interactions and atomic relaxations involved.


UAlbertaScience | Less is more for atomic-scale manufacturing

This animation represents an electrical current being switched on and off. Remarkably, the current is confined to a channel that is just one atom wide. Also, the switch is made of just one atom. When the atom in the center feels an electric field tugging at it, it loses its electron. Once that electron is lost, the many electrons in the body of the silicon (to the left) have a clear passage to flow through. When the electric field is removed, an electron gets trapped in the central atom, switching the current off. This represents the latest work out of Robert Wolkow’s lab at the University of Alberta.


Abstract of Indications of chemical bond contrast in AFM images of a hydrogen-terminated silicon surface

The origin of bond-resolved atomic force microscope images remains controversial. Moreover, most work to date has involved planar, conjugated hydrocarbon molecules on a metal substrate, thereby limiting knowledge of the generality of findings made about the imaging mechanism. Here we report the study of a very different sample: a hydrogen-terminated silicon surface. A procedure to obtain a passivated hydrogen-functionalized tip is defined, and the evolution of atomic force microscopy images at different tip elevations is shown. At relatively large tip-sample distances, the topmost atoms appear as distinct protrusions. However, on decreasing the tip-sample distance, features consistent with the silicon covalent bonds of the surface emerge. Using a density functional tight-binding-based method to simulate atomic force microscopy images, we reproduce the experimental results. The role of the tip flexibility and the nature of bonds and false bond-like features are discussed.

How to build your own bio-bot

Bio-bot design inspired by the muscle-tendon-bone complex found in the human body, with 3D-printed flexible skeleton. Optical stimulation of the muscle tissue (orange), which is genetically engineered to contract in response to blue light, makes the bio-bot walk across a surface in the direction of the light. (credit: Ritu Raman et al./Nature Protocols)

For the past several years, researchers at the University of Illinois at Urbana-Champaign have reverse-engineered native biological tissues and organs — creating tiny walking “bio-bots” powered by muscle cells and controlled with electrical and optical pulses.

Now, in an open-access cover paper in Nature Protocols, the researchers are sharing a protocol with engineering details for their current generation of millimeter-scale soft robotic bio-bots*.

These devices combine 3D-printed skeletons with tissue-engineered skeletal muscle actuators that drive locomotion across 2D surfaces, and could one day be used for studies of muscle development and disease, high-throughput drug testing, and dynamic implants, among other applications.

In a new design, the researchers worked with MIT optogenetics experts to genetically engineer a light-responsive skeletal muscle cell line that could be stimulated to contract by pulses of blue light. (credit: Ritu Raman et al./Nature Protocols)

The future of bio-bots

The researchers envision future generations of bio-bots as biological building blocks that lead to the machines of the future. The bio-bots would integrate multiple cell and tissue types, including neuronal networks for sensing and processing, and vascular networks for delivery of nutrients and other biochemical factors. They might also have some of the higher-order properties of biological materials, such as self-organization and self-healing.

“These next iterations of biohybrid machines could, for example, be designed to sense chemical toxins, locomote toward them, and neutralize them through cell-secreted factors. Such a functionality could have broad relevance in medical diagnostics and targeted therapeutics in vivo, or even be extended to environmental use as a method of cleaning pathogens from public water supplies,” the researchers note in the paper.

“This protocol is essentially intended to be a one-stop reference for any scientist around the world who wants to replicate the results we showed in our PNAS 2016 and PNAS 2014 papers, and give them a framework for building their own bio-bots for a variety of applications,” said Bioengineering Professor Rashid Bashir**, who heads the bio-bots research group.

Bashir’s group has been a pioneer in designing and building bio-bots, less than a centimeter in size, made of flexible 3D printed hydrogels and living cells. In 2012, the group demonstrated bio-bots that could “walk” on their own, powered by beating heart cells from rats. In 2014, they switched to muscle cells controlled with electrical pulses, giving researchers unprecedented command over their function.

* Not to be confused with swimming biobots and rescue biobots using remotely controlled cockroaches.

** Bashir is also Grainger Distinguished Chair in Engineering and head of the Department of Bioengineering. Work on the bio-bots was conducted at the Micro + Nanotechnology Lab at Illinois.


NewsAtIllinois | Light illuminates the way for bio-bots


Abstract of A modular approach to the design, fabrication, and characterization of muscle-powered biological machines

Biological machines consisting of cells and biomaterials have the potential to dynamically sense, process, respond, and adapt to environmental signals in real time. As a first step toward the realization of such machines, which will require biological actuators that can generate force and perform mechanical work, we have developed a method of manufacturing modular skeletal muscle actuators that can generate up to 1.7 mN (3.2 kPa) of passive tension force and 300 μN (0.56 kPa) of active tension force in response to external stimulation. Such millimeter-scale biological actuators can be coupled to a wide variety of 3D-printed skeletons to power complex output behaviors such as controllable locomotion. This article provides a comprehensive protocol for forward engineering of biological actuators and 3D-printed skeletons for any design application. 3D printing of the injection molds and skeletons requires 3 h, seeding the muscle actuators takes 2 h, and differentiating the muscle takes 7 d.
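The force and stress figures quoted in the abstract can be cross-checked against each other: since stress is force per unit area, each force/stress pair implies the actuator's cross-sectional area. The short sketch below does that arithmetic (the numbers come from the abstract; the interpretation as a single cross-section is an assumption for illustration).

```python
# Cross-check of the abstract's numbers: stress = force / area, so each
# quoted force/stress pair implies the actuator's cross-sectional area.

def area_mm2(force_newtons, stress_pascals):
    """Implied cross-section in mm^2 (1 m^2 = 1e6 mm^2)."""
    return force_newtons / stress_pascals * 1e6

passive = area_mm2(1.7e-3, 3.2e3)   # 1.7 mN at 3.2 kPa
active  = area_mm2(300e-6, 0.56e3)  # 300 uN at 0.56 kPa

# Both pairs imply roughly the same ~0.5 mm^2 muscle cross-section,
# consistent with a millimeter-scale actuator.
print(f"passive: {passive:.2f} mm^2, active: {active:.2f} mm^2")
```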