Imaging study shows you (and your fluid intelligence) can be identified by your brain activity

A connectome maps connections between different brain networks. (credit: Emily Finn)

Your brain activity appears to be as unique as your fingerprints, a new Yale-led “connectome fingerprinting” study published Monday (Oct. 12) in the journal Nature Neuroscience has found.

By analyzing* “connectivity profiles” (patterns of coordinated activity between pairs of brain regions) derived from fMRI (functional magnetic resonance imaging) scans of 126 subjects, the Yale researchers were able to identify specific individuals from the fMRI data alone by this identifying “fingerprint.” The researchers could also assess the subjects’ “fluid intelligence.”

“In most past studies, data have been used to draw contrasts between, say, patients and healthy controls,” said Emily Finn, a Ph.D. student in neuroscience and co-first author of the paper. “We have learned a lot from these sorts of studies, but they tend to obscure individual differences, which may be important.”

Two networks — the medial frontal (purple) and the frontoparietal (teal) — out of the 268 brain regions were found best for identifying people and predicting fluid intelligence (credit: Emily S Finn/Xilin Shen, CC BY-ND)

The researchers looked specifically at areas that showed synchronized activity. The characteristic connectivity patterns were distributed throughout the brain, but notably, two frontoparietal networks emerged as most distinctive.

“These networks are comprised of higher-order association cortices rather than primary sensory regions; these cortical regions are also the most evolutionarily recent and show the highest inter-subject variance,” the researchers note in their paper. “These networks tend to act as flexible hubs, switching connectivity patterns according to task demands. Additionally, broadly distributed across-network connectivity has been reported in these same regions, suggesting a role in large-scale coordination of brain activity.”

Notably, the researchers were able to match a scan of an individual’s brain activity from one imaging session to that person’s scan from another session — even when the person was engaged in a different task each time, although in that case accuracy dropped from 98–99% to 80–90%.

Predicting and treating neuropsychiatric illnesses (or criminal behavior?)

Finn said she hopes that this ability might one day help clinicians predict or even treat neuropsychiatric diseases based on individual brain connectivity profiles. The paper notes that “aberrant functional connectivity in the frontoparietal networks has been linked to a variety of neuropsychiatric illnesses.”

The study raises troubling questions. “Richard Haier, an intelligence researcher at the University of California, Irvine, [suggests that] schools could scan children to see what sort of educational environment they’d thrive in, or determine who’s more prone to addiction, or screen prison inmates to figure out whether they’re violent or not,” Wired reports.

“Minority Report” Hawk-eye display (credit: Fox)

Or perhaps identify future criminals — or even predict future crimes, as with the “Hawk-eye” technology portrayed in episode 3 of the Minority Report TV series.

Identifying fluid intelligence

The researchers also discovered that the same two frontoparietal networks were most predictive of the level of fluid intelligence (the capacity for on-the-spot reasoning to discern patterns and solve problems, independent of acquired knowledge) shown on intelligence tests. That’s consistent with previous reports that structural and functional properties of these networks relate to intelligence.
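The prediction step is in the spirit of leave-one-out regression on connectivity strengths: fit a linear model on all subjects but one, then predict the held-out subject's score. A toy sketch on synthetic data (not the study's actual model or data; every number and variable here is invented for illustration):

```python
import numpy as np

# Hypothetical data: per-subject strengths of edges in the predictive
# networks, plus a measured fluid-intelligence score for each subject.
rng = np.random.default_rng(1)
n_subjects, n_edges = 100, 50
edges = rng.standard_normal((n_subjects, n_edges))
true_w = rng.standard_normal(n_edges)                   # unknown in practice
scores = edges @ true_w + 0.5 * rng.standard_normal(n_subjects)

# Leave-one-out prediction with ordinary least squares.
predicted = np.empty(n_subjects)
for i in range(n_subjects):
    train = np.delete(np.arange(n_subjects), i)
    w, *_ = np.linalg.lstsq(edges[train], scores[train], rcond=None)
    predicted[i] = edges[i] @ w

# How well do the held-out predictions track the measured scores?
r = np.corrcoef(predicted, scores)[0, 1]
assert r > 0.9
```

On clean synthetic data the correlation is near 1; real connectivity-based predictions are far noisier, but the cross-validation logic is the same.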

Data for the study came from the Human Connectome Project led by the WU-Minn Consortium, which is funded by the 16 National Institutes of Health (NIH) Institutes and Centers that support the NIH Blueprint for Neuroscience Research, and by the McDonnell Center for Systems Neuroscience at Washington University. Primary funding for the Yale researchers was provided by the NIH.

* Finn and co-first author Xilin Shen, under the direction of R. Todd Constable, professor of diagnostic radiology and neurosurgery at Yale, compiled fMRI data from 126 subjects who underwent six scan sessions over two days. Subjects performed different cognitive tasks during four of the sessions. In the other two, they simply rested. Researchers looked at activity in 268 brain regions: specifically, coordinated activity between pairs of regions. Highly coordinated activity implies two regions are functionally connected. Using the strength of these connections across the whole brain, the researchers were able to identify individuals from fMRI data alone, whether the subject was at rest or engaged in a task. They were also able to predict how subjects would perform on tasks.
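The matching procedure described in this footnote can be sketched as a nearest-neighbor search: vectorize each subject's region-by-region correlation matrix into a "profile," then identify a new scan by the database profile it correlates with most strongly. A minimal illustration on synthetic data (subject count, time-series length, and noise level are toy values, not the study's):

```python
import numpy as np

def connectivity_profile(timeseries):
    """Vectorize the upper triangle of the region-by-region
    correlation matrix (the 'connectivity profile')."""
    corr = np.corrcoef(timeseries)           # regions x regions
    iu = np.triu_indices_from(corr, k=1)     # unique region pairs
    return corr[iu]

def identify(target_profile, database_profiles):
    """Return the index of the database subject whose profile
    correlates most strongly with the target scan."""
    scores = [np.corrcoef(target_profile, p)[0, 1] for p in database_profiles]
    return int(np.argmax(scores))

# Toy demo: 3 subjects, 268 regions, two scan sessions each.
rng = np.random.default_rng(0)
base = [rng.standard_normal((268, 100)) for _ in range(3)]
session1 = [connectivity_profile(b) for b in base]
# Session 2: same underlying activity plus measurement noise.
session2 = [connectivity_profile(b + 0.3 * rng.standard_normal(b.shape))
            for b in base]

for subject, profile in enumerate(session2):
    assert identify(profile, session1) == subject  # each scan matched correctly
```

Because each subject's profile is far more similar to their own profile from another session than to anyone else's, the maximum-correlation match recovers the right identity.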


Abstract of Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity

Functional magnetic resonance imaging (fMRI) studies typically collapse data from many subjects, but brain functional organization varies between individuals. Here we establish that this individual variability is both robust and reliable, using data from the Human Connectome Project to demonstrate that functional connectivity profiles act as a ‘fingerprint’ that can accurately identify subjects from a large group. Identification was successful across scan sessions and even between task and rest conditions, indicating that an individual’s connectivity profile is intrinsic, and can be used to distinguish that individual regardless of how the brain is engaged during imaging. Characteristic connectivity patterns were distributed throughout the brain, but the frontoparietal network emerged as most distinctive. Furthermore, we show that connectivity profiles predict levels of fluid intelligence: the same networks that were most discriminating of individuals were also most predictive of cognitive behavior. Results indicate the potential to draw inferences about single subjects on the basis of functional connectivity fMRI.

A battery alternative to costly, rare lithium

Potassium ions (purple) are compatible with graphite electrodes (black) and can function in a charge-discharge cycle, chemists have now shown (credit: Oregon State University)

Overturning nearly a century of scientific dogma, Oregon State University chemists have shown that potassium could potentially replace rare, costly lithium in a new potassium-ion battery.

“For decades, people have assumed that potassium couldn’t work with graphite or other bulk carbon anodes in a battery,” said Xiulei (David) Ji, the lead author of the study and an assistant professor of chemistry in the College of Science at Oregon State University. “That assumption is incorrect,” he said. “It’s really shocking that no one ever reported on this issue for 83 years.”

The findings are important, the researchers say, because they open some new alternatives for batteries that can work with well-established, inexpensive graphite as the anode (the high-energy reservoir of electrons).

Lithium is quite rare, making up only 0.0017 percent of the Earth’s crust by weight. That makes it comparatively expensive, and also difficult to recycle.

Cost, availability problems with lithium

“The cost-related problems with lithium are sufficient that you won’t really gain much with economies of scale,” Ji said. “With most products, as you make more of them, the cost goes down. With lithium the reverse may be true in the near future. So we have to find alternatives.”

That alternative, he said, may be potassium, which is 880 times more abundant in the Earth’s crust than lithium. The new findings show that it can work effectively with graphite or soft carbon in the anode of an electrochemical battery.

“It’s safe to say that the energy density (amount of energy stored per unit volume) of a potassium-ion battery may never exceed that of lithium-ion batteries,” he said. But potassium-ion batteries may provide a “long cycling life, a high power density [ability to discharge quickly], a lot lower cost, and be ready to take advantage of the existing manufacturing processes of carbon anode materials.”
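The paper reports that potassium intercalates into graphite up to the staging compound KC8 (one potassium ion, and hence one electron, per eight carbon atoms), which fixes graphite's theoretical capacity for potassium. A quick back-of-the-envelope check using standard constants (this calculation is ours, not a figure from the paper):

```python
# Theoretical gravimetric capacity of graphite fully potassiated to KC8:
# one electron transferred per 8 carbon atoms of host material.
FARADAY = 96485.0      # C per mole of electrons
M_CARBON = 12.011      # g/mol
carbons_per_k = 8

charge_per_gram = FARADAY / (carbons_per_k * M_CARBON)   # C per g of carbon
capacity_mah_g = charge_per_gram / 3.6                   # 1 mAh = 3.6 C

print(round(capacity_mah_g))  # 279
```

The result, about 279 mAh/g, sits just above the 273 mAh/g reversible capacity the chemists measured, which is consistent with nearly complete potassiation of the graphite.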

Electrical energy storage in batteries is essential not only for consumer products such as cell phones and computers, but also in transportation, industry power backup, micro-grid storage, and for the wider use of renewable energy.

The Journal of the American Chemical Society published the findings from this discovery, which was supported by the U.S. Department of Energy. A patent is pending on the new technology.


Abstract of Carbon Electrodes for K-Ion Batteries

We for the first time report electrochemical potassium insertion in graphite in a nonaqueous electrolyte, which can exhibit a high reversible capacity of 273 mAh/g. Ex situ XRD studies confirm that KC36, KC24, and KC8 sequentially form upon potassiation, whereas depotassiation recovers graphite through phase transformations in an opposite sequence. Graphite shows moderate rate capability and relatively fast capacity fading. To improve the performance of carbon K-ion anodes, we synthesized a nongraphitic soft carbon that exhibits cyclability and rate capability much superior to that of graphite. This work may open up a new paradigm toward rechargeable K-ion batteries.

This deep-learning method could predict your daily activities from your lifelogging camera images

Example images from a dataset of 40,000 egocentric images with their respective labels, representative of the 19 activity classes. (credit: Daniel Castro et al./Georgia Institute of Technology)

Georgia Institute of Technology researchers have developed a deep-learning method that uses a wearable smartphone camera to track a person’s activities during a day. It could lead to more powerful Siri-like personal assistant apps and tools for improving health.

In the research, the camera took more than 40,000 pictures (one every 30 to 60 seconds) over a six-month period. The researchers taught a computer to categorize the pictures across 19 activity classes, including cooking, eating, watching TV, working, spending time with family, and driving. The test subject wearing the camera could review and annotate the photos at the end of each day (deleting any as needed for privacy) to ensure that they were correctly categorized.
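The paper's "late fusion ensemble" combines the CNN's per-class probabilities for an image with a second model's probabilities based on context such as time and day of week, then picks the top fused class. A toy sketch of that fusion step (the class subset, weights, and probability values are made up; the real system fuses over all 19 classes):

```python
import numpy as np

CLASSES = ["cooking", "eating", "watching TV", "working", "driving"]  # 5 of 19

def late_fusion(image_probs, context_probs, w_image=0.7):
    """Fuse per-class probabilities from an image model and a
    context (time/day) model by weighted averaging, then classify."""
    fused = (w_image * np.asarray(image_probs)
             + (1 - w_image) * np.asarray(context_probs))
    return CLASSES[int(np.argmax(fused))]

# Hypothetical outputs: the CNN is torn between cooking and eating,
# but the time of day makes 'eating' much more plausible.
image_probs   = [0.40, 0.38, 0.10, 0.07, 0.05]
context_probs = [0.05, 0.60, 0.15, 0.10, 0.10]
print(late_fusion(image_probs, context_probs))  # eating
```

The context model needn't be strong on its own; it only has to break ties the image model can't resolve, which is where the reported accuracy gain comes from.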

Wearable smartphone camera used in the research (credit: Daniel Castro et al.)

It knows what you are going to do next

The method was then able to determine with 83 percent accuracy what activity a person was doing at a given time, based only on the images.

The researchers believe they have gathered the largest annotated dataset of first-person images used to demonstrate that deep-learning can “understand” human behavior and the habits of a specific person.

The researchers believe that within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.*

“Imagine if a device could learn what I would be doing next — ideally predict it — and recommend an alternative?” says Daniel Castro, a Ph.D. candidate in computer science and a lead researcher on the project, who helped present the method earlier this month at UbiComp 2015 in Osaka, Japan. “Once it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.”

That could be based on a future version of a smartphone app like Waze, which allows drivers to share real-time traffic and road info. In possibly related news, Apple Inc. recently acquired Perceptio, a startup developing image-recognition systems for smartphones, using deep learning.

The open-access research, which was conducted in the School of Interactive Computing and the Institute for Robotics and Intelligent Machines at Georgia Tech, can be found here.

* Or not. “As more consumers purchase wearable tech, they unknowingly expose themselves to both potential security breaches and ways that their data may be legally used by companies without the consumer ever knowing,” TechRepublic notes.


Abstract of  Predicting Daily Activities From Egocentric Images Using Deep Learning

We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person’s activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.

New ‘optoelectrode’ probe is potential neuroscience-technology breakthrough

Device for multichannel intracortical neural recording and optical stimulation. (a) A single optoelectrode structure. The zinc oxide (ZnO) shank is electrically insulated except for the active tip area, and shanks are isolated from each other by polymer adhesive. (b) Electron microscope image of the microscopically smooth tip with the recording area, covered by a final indium-tin oxide (ITO) conducting overlayer. (c) A 4 × 4 micro-optoelectrode array device flip-chip bonded on thin, flexible and semitransparent polyimide electrical cable. (credit: Joonhee Lee et al./Nature Methods)

Brown University School of Engineering and Seoul National University researchers have combined optoelectronics and intracortical neural recording for the first time — enabling neuroscientists to optically stimulate neuron activity while simultaneously recording the effects of the stimulation on associated neural microcircuits.

Described in the journal Nature Methods, the new compact, integrated device uses a semiconductor called zinc oxide, which is optically transparent yet able to conduct an electrical current. That makes it possible to both stimulate and detect with the same material.

The chip is just a few millimeters square, with sixteen pin-like micrometer-scale “optoelectrodes,” each capable of both delivering light pulses and sensing electrical current. The array of optoelectrodes also enables the device to couple to neural microcircuits composed of many neurons, rather than single neurons, with millisecond precision.

“We think this is a window-opener,” said Joonhee Lee, a senior research associate in Professor Arto Nurmikko’s lab and one of the lead authors of the new paper. “The ability to rapidly perturb neural circuits according to specific spatial patterns and at the same time reconstruct how the circuits involved are perturbed, is in our view a substantial advance.”

First introduced around 2005, optogenetics involves genetically engineering neurons to express light-sensitive proteins on their membranes. With those proteins expressed, pulses of light can be used to either promote or suppress activity in those particular cells. The method gives researchers, in principle, unprecedented ability to control specific brain cells at specific times.

But until now, simultaneous optogenetic stimulation and recording of brain activity rapidly across multiple points within a brain microcircuit of interest has proven difficult. It requires a device that can both generate a spatial pattern of light pulses and detect the dynamical patterns of electrical reverberations generated by excited cellular activity.

Previous attempts to do this involved devices that cobbled together separate components for light emission and electrical sensing. Such probes were physically bulky, which is not ideal for insertion into a brain. And because the emitters and the sensors were necessarily hundreds of micrometers apart, a sizable distance at neural scale, the link between stimulation and recorded signal was not reliable.

The researchers’ next steps are developing a wireless version and using the technology as a chronic implant in non-human primates at potentially hundreds of points and, depending on future progress in worldwide research on optogenetics, perhaps even one day in humans.


Abstract of Transparent intracortical microprobe array for simultaneous spatiotemporal optical stimulation and multichannel electrical recording

Optogenetics, the selective excitation or inhibition of neural circuits by light, has become a transformative approach for dissecting functional brain microcircuits, particularly in in vivo rodent models, owing to the expanding libraries of opsins and promoters. Yet there is a lack of versatile devices that can deliver spatiotemporally patterned light while performing simultaneous sensing to map the dynamics of perturbed neural populations at the network level. We have created optoelectronic actuator and sensor microarrays that can be used as monolithic intracortical implants, fabricated from an optically transparent, electrically highly conducting semiconductor ZnO crystal. The devices can perform simultaneous light delivery and electrical readout in precise spatial registry across the microprobe array. We applied the device technology in transgenic mice to study light-perturbed cortical microcircuit dynamics and their effects on behavior. The functionality of this device can be further expanded to optical imaging and patterned electrical microstimulation.

Stephen Hawking on AI

Stephen Hawking on Last Week Tonight with John Oliver (credit: HBO)

Reddit published Stephen Hawking’s answers to questions in an “Ask me anything” (AMA) event on Thursday (Oct. 8).

Most of the answers focused on his concerns about the future of AI and its role in our future. Here are some of the most interesting ones. The full list is in this Wired article. (His answers to John Oliver below are funnier.)

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Forbes offers a different opinion on the last answer.


HBO | Last Week Tonight with John Oliver: Stephen Hawking Interview

A realistic bio-inspired robotic finger

Heating and cooling a 3D-printed shape memory alloy to operate a robotic finger (credit: Florida Atlantic University/Bioinspiration & Biomimetics)

A realistic 3D-printed robotic finger using a shape memory alloy (SMA) and a unique thermal training technique has been developed by Florida Atlantic University assistant professor Erik Engeberg, Ph.D.

“We have been able to thermomechanically train our robotic finger to mimic the motions of a human finger, like flexion and extension,” said Engeberg. “Because of its light weight, dexterity and strength, our robotic design offers tremendous advantages over traditional mechanisms, and could ultimately be adapted for use as a prosthetic device, such as on a prosthetic hand.”

Most robotic parts used today are rigid, have a limited range of motion and don’t look lifelike.

In the study, described in an open-access article in the journal Bioinspiration & Biomimetics, Engeberg and his team used a resistive heating process called Joule heating, in which an electric current passing through a conductor releases heat.

How to create a robotic finger

  • The researchers first downloaded a 3-D computer-aided design (CAD) model of a human finger from the Autodesk 123D website (under creative commons license).
  • With a 3-D printer, they created the inner and outer molds that housed a flexor and extensor actuator and a position sensor. The extensor actuator takes a straight shape when it’s heated and the flexor actuator takes a curved shape when heated.
  • They used SMA plates and a multi-stage casting process to assemble the finger.
  • Electric currents flow through each SMA actuator from an electric power source at the base of the finger as a heating and cooling process to operate the robotic finger.
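The abstract below describes the control scheme as an outer position loop driving two inner current loops. A much-simplified sketch of the antagonistic idea: the sign of the position error selects which actuator to heat, and the error magnitude sets the current. All gains, limits, and the one-line plant model here are invented for illustration; real SMA dynamics are nonlinear and thermal:

```python
def antagonistic_step(angle, target, k_p=2.0, i_max=1.5):
    """One step of a toy antagonistic controller: the sign of the
    position error picks the actuator (flexor or extensor) that
    receives current; the magnitude is a saturated proportional command."""
    error = target - angle
    current = min(k_p * abs(error), i_max)
    if error > 0:
        return current, 0.0   # heat the flexor to curl the finger
    return 0.0, current       # heat the extensor to straighten it

# Hypothetical first-order plant: current to one actuator nudges the
# joint angle toward that actuator's trained shape.
angle = 0.0
for _ in range(50):
    i_flex, i_ext = antagonistic_step(angle, target=1.0)
    angle += 0.1 * (i_flex - i_ext)

print(round(angle, 2))  # 1.0, the commanded position
```

Heating one actuator while the other cools is what lets a pair of oppositely trained SMA plates behave like the flexor/extensor muscle pair of a real finger.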

Results from the study showed rapid flexing and extending motion of the finger, and an ability to recover its trained shape accurately and completely, confirming the effectiveness of the thermomechanical training.

Initial use in underwater robotics

“Because SMAs require a heating process and cooling process, there are challenges with this technology, such as the lengthy amount of time it takes for them to cool and return to their natural shape, even with forced air convection,” said Engeberg. So they used the technology for underwater robotics, which would provide a rapid-cooling environment.

Engeberg used thermal insulators at the fingertip, which were kept open to facilitate water flow inside the finger. As the finger flexed and extended, water flowed through the inner cavity within each insulator to cool the actuators.

“Because our robotic finger consistently recovered its thermomechanically trained shape better than other similar technologies, our underwater experiments clearly demonstrated that the water cooling component greatly increased the operational speed of the finger,” said Engeberg.

Undersea applications using Engeberg’s new technology could help to address some of the difficulties and challenges humans encounter while working in ocean depths.


FAU – BioRobotics Lab | Bottle Pick and Drop Demo UR10 and Shadow Hand


FAU – BioRobotics Lab | Simultaneous Grasp Synergies Controlled by EMG


FAU – BioRobotics Lab | Shadow Hand and UR10 – Grab Bottle, Pour Liquid


Abstract of Anthropomorphic finger antagonistically actuated by SMA plates

Most robotic applications that contain shape memory alloy (SMA) actuators use the SMA in a linear or spring shape. In contrast, a novel robotic finger was designed in this paper using SMA plates that were thermomechanically trained to take the shape of a flexed human finger when Joule heated. This flexor actuator was placed in parallel with an extensor actuator that was designed to straighten when Joule heated. Thus, alternately heating and cooling the flexor and extensor actuators caused the finger to flex and extend. Three different NiTi based SMA plates were evaluated for their ability to apply forces to a rigid and compliant object. The best of these three SMAs was able to apply a maximum fingertip force of 9.01 N on average. A 3D CAD model of a human finger was used to create a solid model for the mold of the finger covering skin. Using a 3D printer, inner and outer molds were fabricated to house the actuators and a position sensor, which were assembled using a multi-stage casting process. Next, a nonlinear antagonistic controller was developed using an outer position control loop with two inner MOSFET current control loops. Sine and square wave tracking experiments demonstrated minimal errors within the operational bounds of the finger. The ability of the finger to recover from unexpected disturbances was also shown, along with the frequency response up to 7 rad s⁻¹. The closed-loop bandwidth of the system was 6.4 rad s⁻¹ when operated intermittently and 1.8 rad s⁻¹ when operated continuously.

How to grow old brain cells for studying age-related diseases

Salk scientists developed a new technique to grow aged brain cells from patients’ skin. Fibroblasts (cells in connective tissue) from elderly human donors are directly converted into induced neurons, as shown here. (credit: Salk Institute)

Scientists have developed a first-ever technique for using skin samples from older patients to create brain cells — without first rolling back the youthfulness clock in the cells. The new technique, which yields cells resembling those found in older people’s brains, will be a boon to scientists studying age-related diseases like Alzheimer’s and Parkinson’s.

“This lets us keep age-related signatures in the cells so that we can more easily study the effects of aging on the brain,” says Rusty Gage, a professor in the Salk Institute’s Laboratory of Genetics and senior author of the paper, published yesterday (October 8, 2015) in Cell Stem Cell.

“By using this powerful approach, we can begin to answer many questions about the physiology and molecular machinery of human nerve cells — not just around healthy aging but pathological aging as well,” says Martin Hetzer, a Salk professor also involved in the work.

Over the past few years, researchers have increasingly turned to human stem cells (instead of animals) to study various diseases in humans. For example, scientists can take patients’ skin cells and turn them into induced pluripotent stem cells, which have the ability to become any cell in the body. From there, researchers can prompt the stem cells to turn into brain cells for further study. But this process — even when taking skin cells from an older human — doesn’t guarantee stem cells with “older” properties.

Epigenetic signatures in older cells — patterns of chemical marks on DNA that dictate which genes are expressed when — were reset to match younger signatures in the process. This made studying the aging of the human brain difficult, since researchers couldn’t create “old” brain cells with that approach.

Induced neurons

The researchers decided to try another approach, turning to an even newer technique that lets them directly convert skin cells to neurons, creating what’s called an induced neuron. “A few years ago, researchers showed that it’s possible to do this, completely bypassing the stem cell precursor state,” says Jerome Mertens, a postdoctoral research fellow and first author of the new paper.

The scientists collected skin cells from 19 people, aged from birth to 89, and prompted them to turn into brain cells using both the induced pluripotent stem cell technique and the direct conversion approach. Then, they compared the patterns of gene expression in the resulting neurons with cells taken from autopsied brains.

When the induced pluripotent stem cell method was used, as expected, the gene-expression patterns were indistinguishable between neurons derived from young donors and those derived from old donors. But brain cells created using the direct conversion technique showed different patterns of gene expression depending on whether they came from young donors or older adults.

“The neurons we derived showed differences depending on donor age,” says Mertens. “And they actually show changes in gene expression that have been previously implicated in brain aging.” For instance, levels of a nuclear pore protein called RanBP17 — whose decline is linked to nuclear transport defects that play a role in neurodegenerative diseases — were lower in the neurons derived from older patients.

Now that the direct conversion of skin cells to neurons has been shown to retain these signatures of age, Gage expects the technique to become a valuable tool for studying aging. And, while the current work only tested its effectiveness in creating brain cells, he suspects a similar method will let researchers create aged heart and liver cells as well.

Scientists at Friedrich-Alexander University Erlangen-Nuremberg and Tsinghua University were also involved in the study, which was supported by grants from the G. Harold & Leila Y. Mathers Charitable Foundation, the JPB Foundation, the Leona M. and Harry B. Helmsley Charitable Trust, Annette Merle-Smith, CIRM, the German Federal Ministry of Education and Research, and the Glenn Foundation for Medical Research.


Abstract of Human induced neurons retain aging transcriptome signatures that identify compromised nucleocytoplasmic compartmentalization during aging

Aging is a major risk factor for many human diseases, and in vitro generation of human neurons is an attractive approach for modeling aging-related brain disorders. However, modeling aging in differentiated human neurons has proved challenging. We generated neurons from human donors across a broad range of ages, either by iPSC-based reprogramming and differentiation or by direct conversion into induced neurons (iNs). While iPSCs and derived neurons did not retain aging-associated gene signatures, iNs displayed age-specific transcriptional profiles and revealed age-associated decreases in the nuclear transport receptor RanBP17. We detected an age-dependent loss of nucleocytoplasmic compartmentalization (NCC) in donor fibroblasts and corresponding iNs, and found that reduced RanBP17 impaired NCC in young cells while iPSC rejuvenation restored NCC in aged cells. These results show that iNs retain important aging-related signatures, thus allowing modeling of the aging process in vitro, and identify impaired NCC as an important factor in human aging.

A soft, bio-friendly ‘3-D’ brain-implant electrode

A schematic of a “3-D” flexible electrode array. Note the Z-shaped part of the electrode array located between the cranial bone and the brain surface, and the tips (10 micrometers) of the protrusions at the bottom, which serve as recording sites. (credit: Johan Agorelius et al./Front. Neurosci.)

Researchers at Lund University have developed implantable multichannel electrodes that can capture signals from single neurons in the brain over a long period of time — without causing brain tissue damage, making it possible to better understand brain function in both healthy and diseased individuals.

Current flexible electrodes can’t maintain their shape when implanted, which is why they have to be attached to a solid chip. That limits their flexibility and irritates brain tissue, eventually killing surrounding nerve cells and making signals unreliable, says professor Jens Schouenborg.

He explains that recording neuronal signals from the brain requires an electrode that is bio-friendly (doesn’t cause any significant damage to brain tissue) and that is flexible in relation to the brain tissue (the brain floats in fluid inside the skull and moves around whenever a person breathes or turns their head).

“The electrode and the implantation technology that we have now developed have these properties,” he says. Described in an open-access paper in the journal Frontiers in Neuroscience, the new “3-D electrodes” are unique in that they are extremely soft (they even deflect against a water surface) and flexible in all three dimensions, enabling stable recordings from neurons over a long period of time.


Lund University | Breakthrough for electrode implants in the brain

How to implant soft electrodes

But the challenge was how to implant these electrodes in the brain. Visualize pushing spaghetti into a slab of meat. The solution: encapsulating the electrodes in a hard but dissolvable gelatin material, one that is also very gentle on the brain.

“This technology retains the electrodes in their original form inside the brain and can monitor what happens inside virtually undisturbed and normally functioning brain tissue,” said Johan Agorelius, a doctoral student in the project.

This allows for better understanding of what happens inside the brain and for developing more effective treatments for diseases such as Parkinson’s disease and chronic pain conditions, says Schouenborg.


Abstract of An array of highly flexible electrodes with a tailored configuration locked by gelatin during implantation—initial evaluation in cortex cerebri of awake rats

Background: A major challenge in the field of neural interfaces is to overcome the problem of poor stability of neuronal recordings, which impedes long-term studies of individual neurons in the brain. Conceivably, unstable recordings reflect relative movements between electrode and tissue. To address this challenge, we have developed a new ultra-flexible electrode array and evaluated its performance in awake non-restrained animals.

Methods: An array of eight separate gold leads (4 × 10 μm), individually flexible in 3D, was cut from a gold sheet using laser milling and insulated with Parylene C. To provide structural support during implantation into rat cortex, the electrode array was embedded in a hard gelatin-based material, which dissolves after implantation. Recordings were made over 3 weeks. At termination, the animals were perfused with fixative and frozen to prevent dislocation of the implanted electrodes. A thick slice of brain tissue, with the electrode array still in situ, was made transparent using methyl salicylate to evaluate the conformation of the implanted electrode array.

Results: Median noise levels and signal/noise ratios remained relatively stable during the 3-week observation period: 4.3–5.9 μV and 2.8–4.2, respectively. The spike amplitudes were often quite stable within recording sessions, and for 15% of recordings where single units were identified, the highest-SNR unit had an amplitude higher than 150 μV. In addition, high correlations (>0.96) between unit waveforms recorded at different time points were obtained for 58% of the electrode sites. The structure of the electrode array was well preserved 3 weeks after implantation.

Conclusions: A new implantable multichannel neural interface, comprising electrodes individually flexible in 3D, that retains its architecture and functionality after implantation has been developed. Since the new neural interface design is adaptable, it offers a versatile tool to explore the function of various brain structures.

A new way to create spintronic magnetic information storage

A magnetized cobalt disk (red) placed atop a thin cobalt-palladium film (light purple background) can be made to confer its own ringed configuration of magnetic moments (orange arrows) to the film below, creating a skyrmion in the film (purple arrows). The skyrmion might be usable in computer data storage systems. (credit: Dustin Gilbert / NIST)

Exotic ring-shaped magnetic effects called “skyrmions*” could be the basis for a new type of nonvolatile magnetic computer data storage, replacing current hard-drive technology, according to a team of researchers at the National Institute of Standards and Technology (NIST) and several universities.

Skyrmions have the advantage of operating at magnetic fields several orders of magnitude weaker than those used in conventional magnetic storage, but until now they have worked only at very low temperatures. The research breakthrough was the discovery of a practical way to create and access magnetic skyrmions under ambient room-temperature conditions.

The skyrmion effect refers to extreme conditions in which certain magnetic materials can develop spots where the magnetic moments** curve and twist, forming a winding, ring-like configuration. This twisted, topological structure protects skyrmions from outside influences, meaning the data they store would not be corrupted easily. To achieve the effect, the physicists placed arrays of tiny magnetized cobalt disks atop a thin film made of cobalt and palladium.

But “seeing” these skyrmion configurations underneath was a challenge. The team solved that by using neutrons to see through the disk.

That discovery has implications for spintronics (using magnetic spin to store data). “The advantage [with skyrmions] is that you’d need way less power to push them around than any other method proposed for spintronics,” said NIST’s Dustin Gilbert. “What we need to do next is figure out how to make them move around.”

Physicists at the University of California, Davis; University of Maryland, College Park; University of California, Santa Cruz; and Lawrence Berkeley National Laboratory were also involved in the study.

* Named after Tony Skyrme, the British physicist who proposed them.

** A measure of a magnet’s strength and orientation, defined by the torque that a magnetic field will exert on it and the force that the magnet can exert on electric currents.


Abstract of Realization of ground-state artificial skyrmion lattices at room temperature

The topological nature of magnetic skyrmions leads to extraordinary properties that provide new insights into fundamental problems of magnetism and exciting potentials for novel magnetic technologies. Prerequisite are systems exhibiting skyrmion lattices at ambient conditions, which have been elusive so far. Here, we demonstrate the realization of artificial Bloch skyrmion lattices over extended areas in their ground state at room temperature by patterning asymmetric magnetic nanodots with controlled circularity on an underlayer with perpendicular magnetic anisotropy (PMA). Polarity is controlled by a tailored magnetic field sequence and demonstrated in magnetometry measurements. The vortex structure is imprinted from the dots into the interfacial region of the underlayer via suppression of the PMA by a critical ion-irradiation step. The imprinted skyrmion lattices are identified directly with polarized neutron reflectometry and confirmed by magnetoresistance measurements. Our results demonstrate an exciting platform to explore room-temperature ground-state skyrmion lattices.

Neuroscientists simulate tiny part of rat brain in a supercomputer

A virtual brain slice in the rat neocortex (credit: Henry Markram et al./Cell)

The Blue Brain Project, the simulation core of the European Human Brain Project, released today (Oct. 8) a draft digital reconstruction of the neocortical microcircuitry of the rat brain.

The international team, led by Henry Markram of École Polytechnique Fédérale de Lausanne (EPFL) and funded in part by the Swiss government, completed a first-draft computer reconstruction of a piece of the rat-brain neocortex — about a third of a cubic millimeter of brain tissue containing about 30,000 neurons connected by nearly 40 million synapses.

The electrical behavior of the virtual brain tissue was simulated on supercomputers and found to match the behavior observed in a range of previous experiments on the brain, validating the reconstruction’s biological accuracy, and further simulations revealed novel insights into the functioning of the neocortex. The project has published the full set of experimental data and the digital reconstruction in a public web portal, allowing other researchers to use them.


EPFL, Blue Brain Project, Human Brain Project | Reconstruction and Simulation of Neocortical Microcircuitry

A long-awaited open-access paper (available online until Oct. 22) describing the digital reconstruction was published today by the journal Cell. The reconstruction represents the culmination of 20 years of biological experimentation that generated the core dataset, and 10 years of computational science work that developed the algorithms and built the software ecosystem required to digitally reconstruct and simulate the tissue.

Some scientists see the 36-page paper as proof that the idea of modeling a brain and all of its components is misguided and a waste of money, Science magazine reports. “There is nothing in it that is striking, except that it was a lot of work,” says Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown in Lisbon.

“The reaction to the paper mirrors a dispute that has divided Europe’s neuroscience community since HBP was picked by the European Commission as a so-called Flagship project eligible for up to €1 billion in funding,” Science notes. “Last year, hundreds of scientists signed an open letter charging that HBP was badly managed and too narrowly focused scientifically.”

Other scientists question the study’s usefulness. Although the resulting data collection is one of the most comprehensive to date on a part of the brain, it remains far from sufficient to reconstruct a complete map of the microcircuitry, admits Markram. “We can’t and don’t have to measure everything. The brain is a well-ordered structure, so once you begin to understand the order at the microscopic level, you can start to predict much of the missing data.”

While a long way from the whole brain, the ambitious and controversial Blue Brain Project study demonstrates that it is feasible to digitally reconstruct and simulate brain tissue … a first step and a significant contribution to Europe’s Human Brain Project (which Markram founded), according to a statement by EPFL.

The reconstruction: a digital approximation of brain tissue

In silico reconstruction of cellular and synaptic anatomy and physiology (credit: Henry Markram et al./Cell)

The study was a massive effort by 82 scientists and engineers at institutions in Switzerland, Israel, Spain, Hungary, the USA, China, Sweden, and the UK. The researchers performed tens of thousands of experiments on neurons and synapses in the neocortex of young rats and catalogued each type of neuron and each type of synapse they found. They identified a series of fundamental rules describing how the neurons are arranged in the microcircuit and how they are connected via synapses.

According to Michael Reimann, a lead author who developed the algorithm used to predict the locations of the nearly 40 million synapses in the microcircuitry: “The algorithm begins by positioning realistic 3D models of neurons in a virtual volume, respecting the measured distribution of different neuron types at different depths. It then detects all locations where the branches of the neurons touch each other — over 600 million.

“It then systematically prunes [deletes] all the touches that do not fit with five biological rules of connectivity. That leaves 37 million touches. These are the locations where we constructed our model synapses.” To model the behavior of synapses, the researchers integrated data from their experiments and data from the literature. “It is a big step forward that we can now estimate the ion currents flowing through 37 million synapses by integrating data for only a few of them,” says Srikanth Ramaswamy, a lead author.
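The touch-detect-then-prune idea Reimann describes can be illustrated with a deliberately simplified sketch. This is not the project’s actual code: here neuron branches are reduced to points in 3-D space, every close approach between two different neurons counts as a “touch,” and rule-based filters then prune the touches down to candidate synapse locations. The 1 μm contact radius, the stand-in rule, and all names are illustrative assumptions.

```python
import itertools

TOUCH_RADIUS = 1.0  # assumed contact threshold, in micrometers

def detect_touches(neurons):
    """Return (pre_id, post_id, point) for every close approach
    between branch points of two different neurons."""
    touches = []
    for (id_a, pts_a), (id_b, pts_b) in itertools.combinations(neurons.items(), 2):
        for pa in pts_a:
            for pb in pts_b:
                dist = sum((u - v) ** 2 for u, v in zip(pa, pb)) ** 0.5
                if dist <= TOUCH_RADIUS:
                    touches.append((id_a, id_b, pa))
    return touches

def prune(touches, rules):
    """Keep only the touches that satisfy every connectivity rule."""
    return [t for t in touches if all(rule(t) for rule in rules)]

# Toy usage: two neurons, each reduced to two branch points (x, y, z).
neurons = {
    "n1": [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)],
    "n2": [(0.5, 0.0, 0.0), (9.0, 0.0, 0.0)],
}
touches = detect_touches(neurons)                       # one close approach
synapses = prune(touches, [lambda t: t[2][2] >= 0.0])   # a stand-in rule
```

In the real reconstruction the same two-phase structure applies at vastly larger scale: an apposition-detection pass over full 3D morphologies, followed by pruning under the five biological connectivity rules.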

Researchers found a close match between connectivity statistics for the digital reconstruction and experimental measurements in biological tissue, which had not been used in the reconstruction, including measurements by researchers outside the project. Javier DeFelipe, a senior author from Universidad Politecnica de Madrid (UPM), confirms that the digital reconstruction compares well with data from powerful electron microscopes, obtained independently at his laboratory.

Idan Segev, a senior author, sees the paper as building on the pioneering work of the Spanish anatomist Santiago Ramón y Cajal from more than 100 years ago. “Ramón y Cajal began drawing every type of neuron in the brain by hand. He even drew in arrows to describe how he thought the information was flowing from one neuron to the next. Today, we are doing what Cajal would be doing with the tools of the day — building a digital representation of the neurons and synapses and simulating the flow of information between neurons on supercomputers. Furthermore, the digitization of the tissue allows the data to be preserved and reused for future generations.”

The simulations: validation against in vivo experiments

The aim of the study was to create a digital approximation of the tissue. The big test is how the circuit behaves when the interactions between all the neurons are simulated on a supercomputer — an enormous challenge for the project’s engineers and scientists.

As reported by Felix Schürmann, a senior author who leads the team that builds the software to run on supercomputers: “Building the digital reconstructions, running the simulations and analyzing the results required a supercomputing infrastructure and a large ecosystem of software … [to] solve the billions of equations needed to simulate each 25-microsecond time-step in the simulation.”
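To give a feel for what “simulating each 25-microsecond time-step” means, here is a minimal sketch, assuming a single leaky integrate-and-fire neuron in place of the billions of coupled equations the project actually solves. The loop structure (advance every state variable by one fixed step, check for spikes, repeat) is the general idea; all parameter values are illustrative, not the project’s.

```python
DT = 25e-6         # time step, seconds (25 microseconds, as in the article)
TAU = 20e-3        # membrane time constant, seconds (assumed)
V_REST = -65e-3    # resting potential, volts (assumed)
V_THRESH = -50e-3  # spike threshold, volts (assumed)

def step(v, drive):
    """Advance the membrane potential by one time step (forward Euler).
    `drive` is an assumed constant input, in volts per second."""
    dv = (-(v - V_REST) / TAU + drive) * DT
    return v + dv

v = V_REST
spikes = 0
for _ in range(int(0.1 / DT)):  # simulate 100 ms of activity
    v = step(v, 1.0)            # constant drive (assumed value)
    if v >= V_THRESH:           # threshold crossing counts as a spike
        spikes += 1
        v = V_REST              # reset the membrane after each spike
```

The real simulation does the equivalent for tens of thousands of multi-compartment neurons and tens of millions of synapses in parallel, which is why a supercomputing infrastructure is needed.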

The researchers ran simulations on the virtual tissue that mimicked previous biological experiments on the brain. The digital reconstruction was not designed to reproduce any specific circuit phenomenon, but a variety of experimental findings emerged.*

Simulations also helped the team to develop new insights that have not yet been possible in biological experiments. For example, the Cell paper describes how they uncovered an unexpected yet major role for calcium in some of the brain’s most fundamental behaviors. Eilif Muller, a lead author, describes how early simulations produced bursts of synchronized neural activity, similar to the activity found in sleep and very different from the asynchronous activity observed in awake animals. “When we decreased the calcium levels to match those found in awake animals and introduced the effect that this has on the synapses, the circuit behaved asynchronously, like neural circuits in awake animals.” Simulations integrating these biological data revealed a fundamental role of calcium in controlling brain states.

The researchers found that there are, in fact, many cellular and synaptic mechanisms that can shift the circuit from one state of activity to another. This suggests that the circuit can change its state to enable different computing capabilities. If so, it could lead to new ways of studying information processing and memory mechanisms in normal brain states such as wakefulness, drowsiness, and sleep, as well as some of the mechanisms in abnormal states such as epilepsy and potentially other brain disorders.

What’s next

Now that the Blue Brain team has published the experimental results and the digital reconstruction, other scientists will be able to use the experimental data and reconstruction to test other theories of brain function.

“The reconstruction is a first draft, it is not complete and it is not yet a perfect digital replica of the biological tissue,” says Markram. In fact, the current version explicitly leaves out many important aspects of the brain, such as glia, blood vessels, gap-junctions, plasticity, and neuromodulation. According to Sean Hill, a senior author: “The job of reconstructing and simulating the brain is a large-scale collaborative one, and the work has only just begun. The Human Brain Project represents the kind of collaboration that is required.”

* One such simulation examined how different types of neurons respond when the fibers coming into the neocortex are stimulated, analogous to touching the skin. The researchers found that the responses of the different types of neurons in the digital reconstruction were very similar to those that had been previously observed in the laboratory. They then searched the reconstruction for exquisitely timed sequences of activity (“triplets”) in groups of three neurons that other researchers had previously observed in the brain.

They found that the reconstruction did indeed express the triplets and also made a new discovery: the triplets only occur when the circuit is in a very special state of activity. They further tested whether the digital reconstruction could reproduce the recent discovery that some neurons in the brain are closely synchronized with neighboring neurons (“chorists”), while others operate independently from the group (“soloists”). The researchers found the chorists and soloists, and were also able to pinpoint the types of neurons involved and propose cellular and synaptic mechanisms for these behaviors.


Abstract of Reconstruction and Simulation of Neocortical Microcircuitry

We present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm3 containing ∼31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ∼8 million connections with ∼37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.