Glowing nanoparticles open new window for live optical biological imaging

(a) High-resolution, high-speed quantum-dot shortwave infrared imaging was used to image the blood-vessel network of a mouse glioblastoma brain tumor (b) at 60 frames per second and to compare it to the blood-vessel network (c) in the opposite (healthy) brain hemisphere. (credit: Oliver T. Bruns et al./ Nature Biomedical Engineering)

A team of researchers has created bright, glowing nanoparticles called quantum dots that can be injected into the body, where they emit light at shortwave infrared (SWIR) wavelengths that pass through the skin — allowing internal body structures such as fine networks of blood vessels to be imaged in vivo (in live animals) on high-speed video cameras for the first time.

The new findings are described in an open-access paper in the journal Nature Biomedical Engineering by Moungi Bawendi, MIT Lester Wolf Professor of Chemistry, and 22 other researchers.*

Near-infrared imaging, at wavelengths between 700 and 900 nanometers (billionths of a meter), is widely used for research on biological tissues because light at these wavelengths can shine through tissue. But wavelengths of around 1,000 to 2,000 nanometers have the potential to provide even better results, because body tissues are more transparent in that longer-wavelength range.

The problem in doing that has been the lack of light-emitting materials that could work at those longer wavelengths and that were bright enough to be easily detected through the surrounding skin and muscle tissues.

Live internal images of awake, moving mice

Contact-free video monitoring of heart and respiratory rate in mice using quantum dots covered with biocompatible lipid molecules and injected into mice. A newly developed camera is highly sensitive to shortwave infrared light. (credit: Oliver T. Bruns et al./ Nature Biomedical Engineering)

Now the team has succeeded in making particles that are “orders of magnitude better than previous materials, and that allow unprecedented detail in biological imaging,” says lead author Oliver T. Bruns, an MIT research scientist. The synthesis of these new particles was initially described in an open-access paper by researchers from the Bawendi group in Nature Communications last year.

These new light-emitting nanoparticles are the first that are bright enough to allow imaging of internal organs in mice that are awake and moving, as opposed to previous methods that required them to be anesthetized, Bruns says. Initial applications would be for preclinical research in animals, as the compounds contain some materials, such as indium arsenide, that are unlikely to be approved for use in humans. The researchers are also working on developing versions that would be safer for humans.

Quantum dots, made of semiconductor materials, emit light whose frequency can be precisely tuned by controlling the exact size and composition of the particles. These were functionalized via three distinct surface coatings that tailor their physiological properties for specific shortwave infrared imaging applications. The quantum dots are so bright that their emissions can be captured with very short exposure times. That makes it possible to produce not just single images but video that captures details of motion, such as the flow of blood, allowing veins and arteries to be distinguished. (credit: Oliver T. Bruns et al./ Nature Biomedical Engineering)
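
The size-tuning described above can be sketched with the textbook Brus effective-mass model for a spherical quantum dot. This is an illustrative back-of-the-envelope calculation only: the material parameters below are generic literature values for bulk InAs, not numbers from the paper, and the effective-mass approximation is known to be crude for small dots.

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0   = 9.1093837015e-31  # free-electron mass, kg
E    = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def brus_emission_nm(radius_m, eg_ev, me_rel, mh_rel, eps_rel):
    """Estimate the emission wavelength (nm) of a spherical quantum dot
    using the Brus effective-mass model: bulk gap + confinement - Coulomb."""
    confinement = (HBAR**2 * math.pi**2 / (2 * radius_m**2)
                   * (1 / (me_rel * M0) + 1 / (mh_rel * M0))) / E   # eV
    coulomb = 1.8 * E / (4 * math.pi * eps_rel * EPS0 * radius_m)   # eV
    e_gap = eg_ev + confinement - coulomb
    return 1239.84 / e_gap  # convert eV to nm

# Illustrative bulk-InAs parameters (assumed literature values):
# gap 0.354 eV, electron mass 0.023 m0, hole mass 0.41 m0, dielectric 15.15
for radius in (5e-9, 6e-9, 8e-9):
    print(f"R = {radius*1e9:.0f} nm -> "
          f"{brus_emission_nm(radius, 0.354, 0.023, 0.41, 15.15):.0f} nm")
```

In this toy model a dot radius of roughly 6 nm lands near 1,500 nm, inside the SWIR window, and larger dots emit at longer wavelengths; that is the size-tuning knob the article describes, though real core/shell dots with surface coatings deviate substantially from this estimate.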

Not only can the new method determine the direction of blood flow, Bruns says, it is detailed enough to track individual blood cells within that flow. “We can track the flow in each and every capillary, at super-high speed,” he says. “We can get a quantitative measure of flow, and we can do such flow measurements at very high resolution, over large areas.”

Such imaging could potentially be used, for example, to study how the blood flow pattern in a tumor changes as the tumor develops, which might lead to new ways of monitoring disease progression or responsiveness to a drug treatment. “This could give a good indication of how treatments are working that was not possible before,” he says.

* The team included members from Harvard Medical School, the Harvard T.H. Chan School of Public Health, Raytheon Vision Systems, and University Medical Center in Hamburg, Germany. The work was supported by the National Institutes of Health, the National Cancer Institute, the National Foundation for Cancer Research, the Warshaw Institute for Pancreatic Cancer Research, the Massachusetts General Hospital Executive Committee on Research, the Army Research Office through the Institute for Soldier Nanotechnologies at MIT, the U.S. Department of Defense, and the National Science Foundation.

Patient moves paralyzed legs with help from electrical stimulation of spinal cord

Electrical stimulation of the spinal cord (credit: Mayo Clinic)

Electrical stimulation of the spinal cord and intense physical therapy have been used by Mayo Clinic researchers to help Jared Chinnock intentionally move his paralyzed legs, stand, and make steplike motions for the first time in three years. The chronic traumatic paraplegia case marks the first time a patient has intentionally controlled previously paralyzed functions within the first two weeks of stimulation.

The case was documented April 3, 2017 in an open-access paper in Mayo Clinic Proceedings. The researchers say these results offer further evidence that a combination of this technology and rehabilitation may help patients with spinal cord injuries regain control over previously paralyzed movements, such as steplike actions, balance control, and standing.

“We’re really excited, because our results went beyond our expectations,” says neurosurgeon Kendall Lee, M.D., Ph.D., principal investigator and director of Mayo Clinic’s Neural Engineering Laboratory. “These are initial findings, but the patient is continuing to make progress.”

Chinnock injured his spinal cord at the sixth thoracic vertebra in the middle of his back three years earlier. He was diagnosed with a “motor complete spinal cord injury,” meaning he could not move or feel anything below the middle of his torso.

Electrical stimulation

The study started with the patient going through 22 weeks of physical therapy. He had three training sessions a week to prepare his muscles for attempting tasks during spinal cord stimulation, and was tested for changes regularly. Some results led researchers to characterize his injury further as “discomplete,” suggesting dormant connections across his injury may remain.

Following physical therapy, he underwent surgery to implant an electrode in the epidural space near the spinal cord below the injured area. The electrode is connected to a computer-controlled device under the skin in the patient’s abdomen, which sends electrical current to the spinal cord, enabling the patient to create movement.*

The data suggest that people with discomplete spinal cord injuries may be candidates for epidural stimulation therapy, but more research is needed into how a discomplete injury contributes to recovering function, the researchers note.

After a three-week recovery period from surgery, the patient resumed physical therapy with stimulation settings adjusted to enable movements. In the first two weeks, he was able to intentionally control his muscles while lying on his side, resulting in leg movements; make steplike motions while lying on his side and while standing with partial support; and stand independently, using his arms on support bars for balance. Intentional (volitional) movement means the patient’s brain is sending a signal to motor neurons in his spinal cord to move his legs purposefully. (credit: Mayo Clinic)

* The Mayo Clinic received permission from the FDA for off-label use.  The Mayo researchers worked closely with the team of V. Reggie Edgerton, Ph.D., at UCLA on this study, which replicates earlier research done at the University of Louisville. Teams from Mayo Clinic’s departments of Neurosurgery and Physical Medicine and Rehabilitation, and the Division of Engineering collaborated on this project. The research was funded by Craig H. Neilsen Foundation, Jack Jablonski BEL13VE in Miracles Foundation, Mayo Clinic Center for Clinical and Translational Sciences, Mayo Clinic Rehabilitation Medicine Research Center, Mayo Clinic Transform the Practice, and The Grainger Foundation.


Mayo Clinic | Researchers Strive to Help Paralyzed Man Make Strides – Mayo Clinic


Mayo Clinic | Epidural Stimulation Enables Motor Function After Chronic Paraplegia


Abstract of Enabling Task-Specific Volitional Motor Functions via Spinal Cord Neuromodulation in a Human With Paraplegia

We report a case of chronic traumatic paraplegia in which epidural electrical stimulation (EES) of the lumbosacral spinal cord enabled (1) volitional control of task-specific muscle activity, (2) volitional control of rhythmic muscle activity to produce steplike movements while side-lying, (3) independent standing, and (4) while in a vertical position with body weight partially supported, voluntary control of steplike movements and rhythmic muscle activity. This is the first time that the application of EES enabled all of these tasks in the same patient within the first 2 weeks (8 stimulation sessions total) of EES therapy.

Neural probes for the spinal cord

Researchers have developed a rubber-like fiber, shown here, that can flex and stretch while simultaneously delivering optical impulses for optoelectronic stimulation and electrical connections for stimulation and monitoring. (credit: Chi (Alice) Lu and Seongjun Park)

A research team led by MIT scientists has developed rubbery fibers for neural probes that can flex and stretch and be implanted into the mouse spinal cord.

The goal is to study spinal cord neurons and ultimately develop treatments to alleviate spinal cord injuries in humans. That requires matching the stretchiness, softness, and flexibility of the spinal cord. In addition, the fibers have to deliver optical impulses (for optoelectronic stimulation of neurons with blue or yellow laser light) and have electrical connections (for electrical stimulation and monitoring of neurons).

Implantable fibers have allowed brain researchers to stimulate specific targets in the brain and monitor electrical responses. But similar studies in the nerves of the spinal cord have been more difficult to carry out. That’s because the spine flexes and stretches as the body moves, and the relatively stiff, brittle fibers used today could damage the delicate spinal cord tissue.

The scientists used a newly developed elastomer (a tough elastic polymer material that can flow and be stretched) that is transparent (like a fiber optic cable) for transmitting optical signals, and formed an external mesh coating of silver nanowires as a conductive layer for electrical signals. Think of it as tough, transparent, silver spaghetti.

Fabrication of flexible neural probes. (A) Thermal (heat) drawing produced a flexible optical fiber that also served as a structural core for the probe. (B) Spool of fiber with a transparent polycarbonate (PC) core and cyclic olefin copolymer (COC) cladding; the cladding enabled the preform to be drawn into a fiber and was dissolved away after the drawing process. (C) Transmission electron microscopy (TEM) image of silver nanowires (AgNW). (D) Cross-sectional image of the fiber probe with biocompatible polydimethylsiloxane (PDMS) coating. (E) Scanning electron microscopy image showing a portion of the ring silver nanowire electrode cross section. (F) Scanning electron microscopy image of the silver nanowire mesh on top of the fiber surface. (credit: Chi Lu et al./Science Advances)

The fibers are “so floppy, you could use them to do sutures and deliver light at the same time,” says MIT Professor Polina Anikeeva. The fiber can stretch by at least 20 to 30 percent without affecting its properties, she says. “Eventually, we’d like to be able to use something like this to combat spinal cord injury. But first, we have to have biocompatibility and to be able to withstand the stresses in the spinal cord without causing any damage.”

Scientists doing research on spinal cord injuries or disease usually must use larger animals in their studies, because the larger nerve fibers can withstand the more rigid wires used for stimulus and recording. While mice are generally much easier to study and available in many genetically modified strains, there was previously no technology that allowed them to be used for this type of research.

The fibers are not only stretchable but also very flexible. (credit: Chi (Alice) Lu and Seongjun Park)

The team included researchers at the University of Washington and Oxford University. The research was supported by the National Science Foundation, the National Institute of Neurological Disorders and Stroke, the U.S. Army Research Laboratory, and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT.


Abstract of Flexible and stretchable nanowire-coated fibers for optoelectronic probing of spinal cord circuits

Studies of neural pathways that contribute to loss and recovery of function following paralyzing spinal cord injury require devices for modulating and recording electrophysiological activity in specific neurons. These devices must be sufficiently flexible to match the low elastic modulus of neural tissue and to withstand repeated strains experienced by the spinal cord during normal movement. We report flexible, stretchable probes consisting of thermally drawn polymer fibers coated with micrometer-thick conductive meshes of silver nanowires. These hybrid probes maintain low optical transmission losses in the visible range and impedance suitable for extracellular recording under strains exceeding those occurring in mammalian spinal cords. Evaluation in freely moving mice confirms the ability of these probes to record endogenous electrophysiological activity in the spinal cord. Simultaneous stimulation and recording is demonstrated in transgenic mice expressing channelrhodopsin 2, where optical excitation evokes electromyographic activity and hindlimb movement correlated to local field potentials measured in the spinal cord.

Graphene-based neural probe detects brain activity at high resolution and signal quality

16 flexible graphene transistors (inset) integrated into a flexible neural probe enable electrical signals from neurons to be measured at high resolution and signal quality. (credit: ICN2)

Researchers from the European Graphene Flagship* have developed a new microelectrode array neural probe based on graphene field-effect transistors (FETs) for recording brain activity at high resolution while maintaining excellent signal-to-noise ratio (quality).

The new neural probe could lay the foundation for a future generation of in vivo neural recording implants, for patients with epilepsy, for example, and for disorders that affect brain function and motor control, the researchers suggest. It could possibly play a role in Elon Musk’s just-announced Neuralink “neural lace” research project.

Measuring neural activity with high precision

(Left) Representation of the graphene implant placed on the surface of the rat’s brain. (Right) microscope image of a multielectrode array with conventional platinum electrodes (a) vs. the miniature graphene device next to it (b). Scale bar is 1.25 mm. (credit:  Benno M. Blaschke et al./ 2D Mater.)

Neural activity is measured by detecting the electric fields generated when neurons fire. These fields are highly localized, so ultra-small measuring devices that can be densely packed are required for accurate brain readings.

The new device has a microelectrode array of 16 graphene-based transistors arranged on a flexible substrate that can conform to the brain’s surface. Graphene provides biocompatibility, chemical stability, flexibility, and excellent electrical properties, which make it attractive for use in medical devices, especially for brain activity, the researchers suggest.**

(For a state-of-the-art example of microelectrode array use in the brain, see “Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users.”)

Schematic of the head of a graphene implant showing a graphene transistor array and feed lines. (Inset): cross section of a graphene transistor with graphene between the source and drain contacts, which are covered by an insulating polyimide photoresist. (credit:  Benno M. Blaschke et al./ 2D Mater.)

In an experiment with rats, the researchers used the new devices to record brain activity during sleep and in response to visual light stimulation.

The graphene transistor probes showed good spatial discrimination (identifying specific locations) of the brain activity and outperformed state-of-the-art platinum electrode arrays, with higher signal amplification and a better signal-to-noise performance when scaled down to very small sizes.

That means the graphene transistor probes can be packed more densely and at higher resolution, features that are vital for precision mapping of brain activity. And since the probes have transistor amplifiers built in, they remove the need for the separate pre-amplification required with metal electrodes.

Neural probes are placed directly on the surface of the brain, so safety is important. The researchers determined that the flexible graphene-based probes are non-toxic, did not induce any significant inflammation, and are long-lasting.

“Graphene neural interfaces have shown already a great potential, but we have to improve on the yield and homogeneity of the device production in order to advance towards a real technology,” said Jose Antonio Garrido, who led the research at the Catalan Institute of Nanoscience and Nanotechnology in Spain.

“Once we have demonstrated the proof of concept in animal studies, the next goal will be to work towards the first human clinical trial with graphene devices during intraoperative mapping of the brain. This means addressing all regulatory issues associated to medical devices such as safety, biocompatibility, etc.”

The research was published in the journal 2D Materials.

* With a budget of €1 billion, the Graphene Flagship consortium consists of more than 150 academic and industrial research groups in 23 countries. Launched in 2013, the goal is to take graphene from the realm of academic laboratories into European society within 10 years. The research was a collaborative effort involving Flagship partners Technical University of Munich (TU Munich, Germany), Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS, Spain), Spanish National Research Council (CSIC, Spain), The Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN, Spain) and the Catalan Institute of Nanoscience and Nanotechnology (ICN2, Spain).

** “Using multielectrode arrays for high-density recordings presents important drawbacks. Since the electrode impedance and noise are inversely proportional to the electrode size, a trade-off between spatial resolution and signal-to-noise ratio has to be made. Further, the very small voltages of the recorded signals are highly susceptible to noise in the standard electrode configuration. [That requires preamplification, which means] the fabrication complexity is significantly increased and the additional electrical components required for the voltage-to-current conversion limit the integration density. … Metal-oxide-semiconductor field-effect transistors (MOSFETs) where the gate metal is replaced with an electrolyte and an electrode, referred to as “solution-gated field-effect transistors (SGFETs) or electrolyte-gated field-effect transistors, can be exposed directly to neurons and be used to record action potentials with high fidelity. … Although the potential of graphene-based SGFET technology has been suggested in in vitro studies, so far no in vivo confirmation has been demonstrated. Here we present the fabrication of flexible arrays of graphene SGFETs and demonstrate in vivo mapping of spontaneous slow waves, as well as visually evoked and pre-epileptic activity in the rat.” — Benno M. Blaschke et al./2D Mater.
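
The electrode-size trade-off quoted in this footnote (smaller electrode, higher impedance, more noise) can be made concrete with the standard Johnson-Nyquist thermal-noise formula. The impedance and bandwidth values below are illustrative assumptions, not figures from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_uvrms(impedance_ohm, bandwidth_hz, temp_k=310.0):
    """RMS thermal (Johnson) noise across the real part of an electrode's
    impedance, in microvolts, at body temperature by default."""
    return math.sqrt(4 * K_B * temp_k * impedance_ohm * bandwidth_hz) * 1e6

# Since impedance scales roughly inversely with electrode area, shrinking an
# electrode raises its impedance and hence its noise floor. Assumed values:
for z_ohm in (100e3, 400e3, 1.6e6):
    print(f"{z_ohm/1e3:6.0f} kOhm -> "
          f"{johnson_noise_uvrms(z_ohm, 5e3):.2f} uV rms over 5 kHz")
```

Each quadrupling of impedance doubles the noise floor, so at the megohm impedances of very small electrodes the thermal noise alone reaches several microvolts rms, a significant fraction of typical extracellular signal amplitudes; built-in transistor amplification is one way around this trade-off.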


Abstract of Mapping brain activity with flexible graphene micro-transistors

Establishing a reliable communication interface between the brain and electronic devices is of paramount importance for exploiting the full potential of neural prostheses. Current microelectrode technologies for recording electrical activity, however, evidence important shortcomings, e.g. challenging high density integration. Solution-gated field-effect transistors (SGFETs), on the other hand, could overcome these shortcomings if a suitable transistor material were available. Graphene is particularly attractive due to its biocompatibility, chemical stability, flexibility, low intrinsic electronic noise and high charge carrier mobilities. Here, we report on the use of an array of flexible graphene SGFETs for recording spontaneous slow waves, as well as visually evoked and also pre-epileptic activity in vivo in rats. The flexible array of graphene SGFETs allows mapping brain electrical activity with excellent signal-to-noise ratio (SNR), suggesting that this technology could lay the foundation for a future generation of in vivo recording implants.

Musk launches company to pursue ‘neural lace’ brain-interface technology

image credit | Bloomberg

Elon Musk has launched a California-based company called Neuralink Corp., The Wall Street Journal reported today (Monday, March 27, 2017), citing people familiar with the matter, to pursue “neural lace” brain-interface technology.

Neural lace would help prevent humans from becoming “house cats” to AI, he suggests. “I think one of the solutions that seems maybe the best is to add an AI layer,” Musk hinted at the Code Conference last year. It would be a “digital layer above the cortex that could work well and symbiotically with you.”

“We are already a cyborg,” he added. “You have a digital version of yourself online in form of emails and social media. … But the constraint is input/output — we’re I/O bound … particularly output. … Merging with digital intelligence revolves around … some sort of interface with your cortical neurons.”

Reflecting concepts that have been proposed by Ray Kurzweil, “over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk said at the recent World Government Summit in Dubai.

Musk suggested the neural lace interface could be inserted via veins and arteries.

Image showing mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution. (credit: Lieber Research Group, Harvard University)

KurzweilAI reported on one approach to a neural-lace-like brain interface in 2015. A “syringe-injectable electronics” concept was invented by researchers in Charles Lieber’s lab at Harvard University and the National Center for Nanoscience and Technology in Beijing. It would involve injecting a biocompatible polymer scaffold mesh with attached microelectronic devices into the brain via syringe.

The process for fabricating the scaffold is similar to that used to etch microchips, and begins with a dissolvable layer deposited on a biocompatible nanoscale polymer mesh substrate, with embedded nanowires, transistors, and other microelectronic devices attached. The mesh is then tightly rolled up, allowing it to be sucked up into a syringe via a thin (100 micrometers internal diameter) glass needle. The mesh can then be injected into brain tissue by the syringe.

The input-output connection of the mesh electronics can be connected to standard electronics devices (for voltage insertion or measurement, for example), allowing the mesh-embedded devices to be individually addressed and used to precisely stimulate or record individual neural activity.

A schematic showing in vivo stereotaxic injection of mesh electronics into a mouse brain (credit: Jia Liu et al./Nature Nanotechnology)

Lieber’s team has demonstrated this in live mice and verified continuous monitoring and recordings of brain signals on 16 channels. “We have shown that mesh electronics with widths more than 30 times the needle ID can be injected and maintain a high yield of active electronic devices … little chronic immunoreactivity,” the researchers said in a June 8, 2015 paper in Nature Nanotechnology. “In the future, our new approach and results could be extended in several directions, including the incorporation of multifunctional electronic devices and/or wireless interfaces to further increase the complexity of the injected electronics.”

This technology would require surgery, but unlike Musk’s preliminary vein-based concept, it would not face the accessibility limitation imposed by the blood-brain barrier. For direct delivery via the bloodstream, it’s possible that the nanorobots conceived by Robert A. Freitas, Jr. (and extended to interface with the cloud, as Ray Kurzweil has suggested) might be appropriate at some point in the future.

“Neuralink has reportedly already hired several high profile academics in the field of neuroscience: flexible electrodes and nanotechnology expert Venessa Tolosa, PhD; UCSF professor Philip Sabes, PhD, who also participated in the Musk-sponsored Beneficial AI conference; and Boston University professor Timothy Gardner, PhD, who studies neural pathways in the brains of songbirds,” Engadget reports.

UPDATE Mar. 28, 2017:

Recode | We are already cyborgs | Elon Musk | Code Conference 2016

Deep-brain imaging using minimally invasive surgical needle and laser light

An image of cells taken inside a mouse brain using a new minimally invasive, inexpensive method to take high-resolution brain pictures (credit: Rajesh Menon)

Using just a micro-thin glass surgical needle and laser light, University of Utah engineers have developed a simple, inexpensive way to take high-resolution pictures of a mouse brain while minimizing tissue damage — a process they believe could lead to a much less invasive method for humans.

Typically, researchers must either surgically take a sample of the animal’s brain to examine the cells under a microscope or use an endoscope, which can be 10 to 100 times thicker than a needle.

Schematic of new “computational-cannula microscopy” process. Light from a yellow-green laser (right) is focused by lenses and shines into the brain (experimental sample brain tissue is shown here), causing the cells captured by the cannula (needle) to glow. That glow is magnified and then recorded by a standard CCD camera. The recorded light is then run through a sophisticated computer algorithm that re-assembles the scattered light waves into a 2D or, potentially, even a 3D picture. (credit: Ganghun Kim et al./Scientific Reports)

With the new “computational-cannula microscopy” process, the small (220 micrometers) width of the cannula allows for minimally invasive imaging, while the long length (>2 mm*) allows for deep-brain imaging of features of about 3.5 micrometers in size. Since no (slow) scanning is involved, video at the native frame rate of the camera can be achieved, allowing for capturing near-real-time live videos (currently, it takes less than one fifth of a second to compute each frame on a desktop computer).
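
The reconstruction step can be illustrated in miniature: if a calibration step measures how each object pixel maps to a scrambled speckle pattern on the camera (a transmission matrix), recovering the image reduces to a single linear solve per frame, which is why it is fast. The sketch below is a toy simulation of that general idea, with a random transmission matrix; it is not the authors’ actual algorithm or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration: each of n_pix object pixels produces a distinct scrambled
# speckle pattern across n_det camera pixels (the transmission matrix A).
n_pix, n_det = 64, 256
A = rng.normal(size=(n_det, n_pix))

# Ground-truth "fluorescent object": three glowing cells
x_true = np.zeros(n_pix)
x_true[[5, 20, 40]] = 1.0

# Scrambled camera frame, plus a little detector noise
y = A @ x_true + 0.01 * rng.normal(size=n_det)

# Reconstruction: regularized least squares (Tikhonov), one linear solve
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)

print(np.sort(np.argsort(x_hat)[-3:]))  # brightest recovered pixel indices
```

Because the solve uses a fixed, precomputed operator, each new frame costs only a matrix multiply and solve, consistent with the sub-fifth-of-a-second per-frame figure quoted above for real reconstructions.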

In the case of mice, researchers use optogenetics (genetically modifying the animals so that only the cells they want to see glow under the laser light), but Utah electrical and computer engineering associate professor Rajesh Menon, who led the research, believes the new process can potentially be developed for human patients. That would create a simpler, less invasive, and less expensive method than endoscopes, and it could be used for other organs.

Menon and his team have been working with the U. of U.’s renowned Nobel-winning researcher, Distinguished Professor of Biology and Human Genetics Mario Capecchi, and Jason Shepherd, assistant professor of neurobiology and anatomy.

The research is documented in the latest issue of open-access Scientific Reports.

* “With three-photon microscopy, penetration depth of up to 1.2 mm was recently reported. However, three- or multi-photon excitation is extremely inefficient due to the low absorption cross-section, which requires large excitation intensities leading to potential for photo-toxicity. Furthermore, many interesting biological features lie at depths greater than 1.2 mm from the surface of the brain such as the basal ganglia, hippocampus, and the hypothalamus.” — Ganghun Kim et al./Scientific Reports


Abstract of Deep-brain imaging via epi-fluorescence Computational Cannula Microscopy

Here we demonstrate widefield (field diameter = 200 μm) fluorescence microscopy and video imaging inside the rodent brain at a depth of 2 mm using a simple surgical glass needle (cannula) of diameter 0.22 mm as the primary optical element. The cannula guides excitation light into the brain and the fluorescence signal out of the brain. Concomitant image-processing algorithms are utilized to convert the spatially scrambled images into fluorescent images and video. The small size of the cannula enables minimally invasive imaging, while the long length (>2 mm) allow for deep-brain imaging with no additional complexity in the optical system. Since no scanning is involved, widefield fluorescence video at the native frame rate of the camera can be achieved.

Transcranial alternating current stimulation used to boost working memory

The fMRI scans show that stimulation “in beat” increases brain activity in the regions involved in task performance. On the other hand, stimulation “out of beat” showed activity in regions usually associated with resting. (credit: Ines Violante)

In a study published Tuesday Mar. 14 in the open-access journal eLife, researchers at Imperial College London found that applying transcranial alternating current stimulation (tACS) through the scalp helped to synchronize brain waves in different areas of the brain, enabling subjects to perform better on tasks involving short-term working memory.

The hope is that the approach could one day be used to bypass damaged areas of the brain and relay signals in people with traumatic brain injury, stroke, or epilepsy.

“What we observed is that people performed better when the two waves had the same rhythm and at the same time,” said Ines Ribeiro Violante, PhD, a neuroscientist in the Department of Medicine at Imperial, who led the research. The current gave a performance boost to the memory processes used when people try to remember names at a party, telephone numbers, or even a short grocery list.

Keeping the beat

Violante and team targeted two brain regions — the middle frontal gyrus and the inferior parietal lobule — which are known to be involved in working memory.

Ten volunteers were asked to carry out a set of memory tasks of increasing difficulty while receiving theta-frequency stimulation to the two brain regions at slightly different times (unsynchronized), at the same time (synchronized), or as only a quick burst (sham) to give the impression of receiving full treatment.

In the working memory experiments, participants watched numbers flash up on a screen and had to indicate whether each number was the same as the previous one or, in the harder trials, whether it matched the number shown two steps earlier.
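
The task described is a classic n-back working-memory test (1-back for the easy trials, 2-back for the hard ones). A minimal sketch of its target rule, as hypothetical illustration rather than the study’s actual task code:

```python
from typing import List

def n_back_targets(stimuli: List[int], n: int) -> List[bool]:
    """For each stimulus, True if it matches the stimulus shown n steps
    earlier: n=1 is the easy (1-back) trial, n=2 the harder (2-back) one."""
    return [i >= n and stimuli[i] == stimuli[i - n]
            for i in range(len(stimuli))]

seq = [3, 7, 7, 2, 7, 2, 2]
print(n_back_targets(seq, 1))  # [False, False, True, False, False, False, True]
print(n_back_targets(seq, 2))  # [False, False, False, False, True, True, False]
```

The 2-back variant is harder precisely because two items must be held and continuously updated in working memory, which is the load the synchronized stimulation appeared to help with.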

Results showed that when the brain regions were stimulated in sync, reaction times on the memory tasks improved, especially on the harder of the tasks requiring volunteers to hold two strings of numbers in their minds.

“The classic behavior is to do slower on the harder cognitive task, but people performed faster with synchronized stimulation and as fast as on the simpler task,” said Violante.

Previous studies have shown that brain stimulation with electromagnetic waves or electrical current can affect brain activity, but the field has remained controversial due to a lack of reproducibility. Using functional MRI to image the brain, however, enabled the team to show changes in activity occurring during stimulation.

“The results show that when the stimulation was in sync, there was an increase in activity in those regions involved in the task. When it was out of sync the opposite effect was seen,” Violante explained.

Clinical use

“The next step is to see if the brain stimulation works in patients with brain injury, in combination with brain imaging, where patients have lesions which impair long-range communication in their brains,” said Violante. “The hope is that it could eventually be used for these patients, or even those who have suffered a stroke or who have epilepsy.”

The researchers also plan to combine brain stimulation with cognitive training to see if it restores lost skills.

The research was funded by the Wellcome Trust.


Abstract of Externally induced frontoparietal synchronization modulates network dynamics and enhances working memory performance

Cognitive functions such as working memory (WM) are emergent properties of large-scale network interactions. Synchronisation of oscillatory activity might contribute to WM by enabling the coordination of long-range processes. However, causal evidence for the way oscillatory activity shapes network dynamics and behavior in humans is limited. Here we applied transcranial alternating current stimulation (tACS) to exogenously modulate oscillatory activity in a right frontoparietal network that supports WM. Externally induced synchronization improved performance when cognitive demands were high. Simultaneously collected fMRI data reveals tACS effects dependent on the relative phase of the stimulation and the internal cognitive processing state. Specifically, synchronous tACS during the verbal WM task increased parietal activity, which correlated with behavioral performance. Furthermore, functional connectivity results indicate that the relative phase of frontoparietal stimulation influences information flow within the WM network. Overall, our findings demonstrate a link between behavioral performance in a demanding WM task and large-scale brain synchronization.

A biocompatible stretchable material for brain implants and ‘electronic skin’

A printed electrode pattern of a new polymer being stretched to several times of its original length (top), and a transparent, highly stretchy “electronic skin” patch (bottom) from the same material, forming an intimate interface with the human skin to potentially measure various biomarkers (credit: Bao Lab)

Stanford chemical engineers have developed a soft, flexible plastic electrode that stretches like rubber but carries electricity like wires — ideal for brain interfaces and other implantable electronics, they report in an open-access March 10 paper in Science Advances.

Developed by Zhenan Bao, a professor of chemical engineering, and his team, the material is still a laboratory prototype, but the team hopes to develop it as part of their long-term focus on creating flexible materials that interface with the human body.

Flexible interface

“One thing about the human brain that a lot of people don’t know is that it changes volume throughout the day,” says postdoctoral research fellow Yue Wang, the first author on the paper. “It swells and de-swells.” The current generation of electronic implants can’t stretch and contract with the brain, making it complicated to maintain a good connection.

Illustration showing incorporation of ionic liquid-assisted stretchability and electrical conductivity (STEC) enhancers to convert conventional PEDOT:PSS film (top) to stretchable film (bottom). (credit: Wang et al., Sci. Adv.)

To create this flexible electrode, the researchers began with a plastic, PEDOT:PSS, that has high electrical conductivity and biocompatibility (it can safely be brought into contact with the human body) but is brittle. So they added a “STEC” (stretchability and electrical conductivity) enhancer, a molecule similar to the kinds of additives used to thicken soups in industrial kitchens.

This additive transformed the plastic’s chunky and brittle molecular structure into a fishnet pattern with holes in the strands that allow the material to stretch and deform. The resulting plastic remained highly conductive even when stretched to 800 percent of its original length.

Scientists at SLAC National Accelerator Laboratory, UCLA, the Materials Science Institute of Barcelona, and the Samsung Advanced Institute of Technology were also involved in the research, which was funded by Samsung Electronics and the Air Force Office of Scientific Research.


Stanford University School of Engineering | Stretchable electrodes pave way for flexible electronics


Abstract of A highly stretchable, transparent, and conductive polymer

Previous breakthroughs in stretchable electronics stem from strain engineering and nanocomposite approaches. Routes toward intrinsically stretchable molecular materials remain scarce but, if successful, will enable simpler fabrication processes, such as direct printing and coating, mechanically robust devices, and more intimate contact with objects. We report a highly stretchable conducting polymer, realized with a range of enhancers that serve dual functions to change morphology and as conductivity-enhancing dopants in poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The polymer films exhibit conductivities comparable to the best reported values for PEDOT:PSS, with higher than 3100 S/cm under 0% strain and higher than 4100 S/cm under 100% strain—among the highest for reported stretchable conductors. It is highly durable under cyclic loading, with the conductivity maintained at 3600 S/cm even after 1000 cycles to 100% strain. The conductivity remained above 100 S/cm under 600% strain, with a fracture strain as high as 800%, which is superior to even the best silver nanowire– or carbon nanotube–based stretchable conductor films. The combination of excellent electrical and mechanical properties allowed it to serve as interconnects for field-effect transistor arrays with a device density that is five times higher than typical lithographically patterned wavy interconnects.

Brain has more than 100 times higher computational capacity than previously thought, say UCLA scientists

Neuron (blue) with dendrites (credit: Shelley Halpain/UC San Diego)

The brain has more than 100 times higher computational capacity than was previously thought, a UCLA team has discovered.

Upending neuroscience textbooks, the finding suggests that our brains are both analog and digital computers, and it could lead to new approaches for treating neurological disorders and developing brain-like computers, according to the researchers.

Illustration of neuron and dendrites. Dendrites receive electrochemical stimulation (via synapses, not shown here) from neurons (not shown here), and propagate that stimulation to the neuron cell body (soma). A neuron sends electrochemical stimulation via an axon to communicate with other neurons via telodendria (purple, right) at the end of the axon and synapses (not shown here). (credit: Quasar/CC).

Dendrites have been considered simple passive conduits of signals. But by working with animals that were moving around freely, the UCLA team showed that dendrites are in fact electrically active — generating nearly 10 times more spikes than the soma (neuron cell body).

Fundamentally changes our understanding of brain computation

The finding, reported in the March 9 issue of the journal Science, challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information.”

“This is a major departure from what neuroscientists have believed for about 60 years,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology.

Because the dendrites are nearly 100 times larger in volume than the neuronal centers, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously attributed to it.

Study with moving rats made discovery possible

Previous studies have been limited to stationary rats, because scientists have found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.

Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.

Looking at the soma to understand how the brain works has provided a framework for numerous medical and scientific questions — from diagnosing and treating diseases to how to build computers. But, Mehta said, that framework was based on the understanding that the cell body makes the decisions, and that the process is digital.

“What we found indicates that such decisions are made in the dendrites far more often than in the cell body, and that such computations are not just digital, but also analog,” Mehta said. “Due to technological difficulties, research in brain function has largely focused on the cell body. But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”

Funding was provided by the University of California.

Complete neuron cell diagram (credit: LadyofHats/CC)


Abstract of Dynamics of cortical dendritic membrane potential and spikes in freely behaving rats

Neural activity in vivo is primarily measured using extracellular somatic spikes, which provide limited information about neural computation. Hence, it is necessary to record from neuronal dendrites, which generate dendritic action potentials (DAP) and profoundly influence neural computation and plasticity. We measured neocortical sub- and suprathreshold dendritic membrane potential (DMP) from putative distal-most dendrites using tetrodes in freely behaving rats over multiple days with a high degree of stability and sub-millisecond temporal resolution. DAP firing rates were several fold larger than somatic rates. DAP rates were modulated by subthreshold DMP fluctuations which were far larger than DAP amplitude, indicating hybrid, analog-digital coding in the dendrites. Parietal DAP and DMP exhibited egocentric spatial maps comparable to pyramidal neurons. These results have important implications for neural coding and plasticity.

How to control robots with your mind

The robot is informed that its initial motion was incorrect based upon real-time decoding of the observer’s EEG signals, and it corrects its selection accordingly to properly sort an object (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

Two research teams are developing new ways to communicate with robots, hoping to shape them one day into the kind of productive workers featured in the AMC TV show HUMANS (now in its second season).

Programming robots to function in a real-world environment is normally a complex process. But now a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a system that lets people correct robot mistakes instantly by simply thinking.

In the initial experiment, the system uses data from an electroencephalography (EEG) helmet to correct robot performance on an object-sorting task. Novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.
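The closed-loop idea can be sketched as classifying a short EEG window as "error perceived" or not. The features, classifier, channel count, and window size below are stand-in assumptions; the paper does not disclose the MIT/BU pipeline at this level of detail:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(window):
    """Crude illustrative features: per-channel mean and peak amplitude."""
    return np.concatenate([window.mean(axis=1), window.max(axis=1)])

def classify_errp(window, weights, bias=0.0):
    """Linear decision: positive score -> treat the window as an ErrP."""
    return float(extract_features(window) @ weights + bias) > 0

n_channels, n_samples = 48, 60        # e.g. ~10-30 ms at a high sample rate
window = rng.standard_normal((n_channels, n_samples))   # placeholder EEG data
weights = rng.standard_normal(2 * n_channels)           # pretrained in practice
decision = classify_errp(window, weights)   # True -> tell the robot to correct
```

In the real system the weights would come from supervised training on labeled EEG recordings; here they are random placeholders so the sketch runs standalone.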

The system includes a main experiment controller, a Baxter robot, and an EEG acquisition and classification system. The goal is to make the robot pick up the cup that the experimenter is thinking about. An Arduino computer (bottom) relays messages between the EEG system and robot controller. A mechanical contact switch (yellow) detects robot arm motion initiation. (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

While the system currently handles relatively simple binary-choice activities, we may be able one day to control robots in much more intuitive ways. “Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word,” says CSAIL Director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”

The team used a humanoid robot named “Baxter” from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.


MITCSAIL | Brain-controlled Robots

Intuitive human-robot interaction

The system detects brain signals called “error-related potentials” (generated whenever our brains notice a mistake) to determine if the human agrees with a robot’s decision.

“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.” Or if the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer.

The team believes that future systems could extend to more complex multiple-choice tasks. The system could even be useful for people who can’t communicate verbally: the robot could be controlled via a series of discrete binary choices, similar to the way paralyzed locked-in patients spell out words with their minds.

The project was funded in part by Boeing and the National Science Foundation. An open-access paper will be presented at the IEEE International Conference on Robotics and Automation (ICRA) conference in Singapore this May.

Here, robot, Fetch!

Robot asks questions, and based on a person’s language and gesture, infers what item to deliver. (credit: David Whitney/Brown University)

But what if the robot is still confused? Researchers in Brown University’s Humans to Robots Lab have an app for that.

“Fetching objects is an important task that we want collaborative robots to be able to do,” said computer science professor Stefanie Tellex. “But it’s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it’s not sure.”

Tellex’s lab previously developed an algorithm that enables robots to receive speech commands as well as information from human gestures. But it ran into problems when there were lots of very similar objects in close proximity to each other. For example, on the table above, simply asking for “a marker” isn’t specific enough, and it might not be clear which one a person is pointing to if a number of markers are clustered close together.

“What we want in these situations is for the robot to be able to signal that it’s confused and ask a question rather than just fetching the wrong object,” Tellex explained.

The new algorithm does just that, enabling the robot to quantify how certain it is that it knows what a user wants. When its certainty is high, the robot will simply hand over the object as requested. When it’s not so certain, the robot makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, “this one?”
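The "ask only when unsure" policy can be sketched as a threshold on the robot's belief over candidate objects. The threshold value and belief representation are illustrative, not taken from the Brown paper:

```python
def act(belief, ask_threshold=0.75):
    """belief: dict mapping candidate object -> probability (sums to 1)."""
    best = max(belief, key=belief.get)
    if belief[best] >= ask_threshold:
        return ("hand_over", best)   # confident: just fetch the object
    return ("ask", best)             # unsure: hover the gripper, ask "this one?"

confident = act({"red_marker": 0.9, "blue_marker": 0.1})
ambiguous = act({"red_marker": 0.55, "blue_marker": 0.45})
```

Here `confident` is `("hand_over", "red_marker")` while `ambiguous` is `("ask", "red_marker")`: the robot guesses the likelier marker but confirms before acting.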


David Whitney | Reducing Errors in Object-Fetching Interactions through Social Feedback

One of the important features of the system is that the robot doesn’t ask questions with every interaction; it asks intelligently.

And even though the system asks only a very simple question, it’s able to make important inferences based on the answer. For example, say a user asks for a marker and there are two markers on a table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other marker must be the one that the user wants, and will hand that one over without asking another question. Those kinds of inferences, known as “implicatures,” make the algorithm more efficient.
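The implicature described above amounts to a belief update: a "no" to the robot's guess eliminates that candidate and renormalizes the probability over the rest. The dict representation is a simplification; the paper formalizes this as a POMDP:

```python
def update_on_no(belief, rejected):
    """Remove the rejected candidate and renormalize the remaining belief."""
    remaining = {k: v for k, v in belief.items() if k != rejected}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

belief = {"marker_a": 0.55, "marker_b": 0.45}
belief = update_on_no(belief, "marker_a")
# With only one candidate left, the robot hands it over without asking again.
```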

In future work, Tellex and her team would like to combine the algorithm with more robust speech recognition systems, which might further increase the system’s accuracy and speed. “Currently we do not consider the parse of the human’s speech. We would like the model to understand prepositional phrases (‘on the left,’ ‘nearest to me’). This would allow the robot to understand how items are spatially related to other items through language.”

Ultimately, Tellex hopes, systems like this will help robots become useful collaborators both at home and at work.

An open-access paper on the DARPA-funded research will also be presented at the International Conference on Robotics and Automation.


Abstract of Correcting Robot Mistakes in Real Time Using EEG Signals

Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.


Abstract of Reducing Errors in Object-Fetching Interactions through Social Feedback

Fetching items is an important problem for a social robot. It requires a robot to interpret a person’s language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person’s requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model’s improvements, we conducted a real world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds faster (25% faster) than a state-of-the-art baseline, while being 2.1% more accurate.