Gartner identifies the top 10 strategic IT technology trends for 2016

Top 10 strategic trends 2016 (credit: Gartner, Inc.)

At the Gartner Symposium/ITxpo today (Oct. 8), Gartner, Inc. highlighted the top 10 technology trends that will be strategic for most organizations in 2016 and will shape digital business opportunities through 2020.

The Device Mesh

The device mesh refers to how people access applications and information or interact with people, social communities, governments and businesses. It includes mobile devices, wearable devices, consumer and home electronic devices, automotive devices, and environmental devices, such as sensors in the Internet of Things (IoT), allowing for greater cooperative interaction between devices.

Ambient User Experience

The device mesh creates the foundation for a new continuous and ambient user experience. Immersive environments delivering augmented and virtual reality hold significant potential but are only one aspect of the experience. The ambient user experience preserves continuity across boundaries of device mesh, time and space. The experience seamlessly flows across a shifting set of devices — such as sensors, cars, and even factories — and interaction channels blending physical, virtual and electronic environments as the user moves from one place to another.

3D Printing Materials

Advances in 3D printing will drive user demand and a compound annual growth rate of 64.1 percent for enterprise 3D-printer shipments through 2019, which will require a rethinking of assembly line and supply chain processes to exploit 3D printing.
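As a rough illustration of what a 64.1 percent compound annual growth rate implies, the sketch below compounds a baseline shipment figure forward. The 100,000-unit 2015 baseline is hypothetical, chosen only to show how a CAGR is applied.

```python
# Project enterprise 3D-printer shipments under the 64.1% CAGR Gartner cites.
# The baseline figure below is hypothetical, purely to illustrate compounding.
def project_shipments(baseline: float, cagr: float, years: int) -> float:
    """Compound the baseline forward by `cagr` for `years` years."""
    return baseline * (1 + cagr) ** years

baseline_2015 = 100_000          # hypothetical unit count
for year in range(1, 5):         # 2016 through 2019
    units = project_shipments(baseline_2015, 0.641, year)
    print(2015 + year, round(units))
```

With the hypothetical 100,000-unit baseline, 2016 comes out to 164,100 units; four years of compounding more than septuples the figure.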

Information of Everything

Everything in the digital mesh produces, uses and transmits information, including sensory and contextual information. “Information of everything” addresses this influx with strategies and technologies to link data from all these different data sources. Advances in semantic tools such as graph databases as well as other emerging data classification and information analysis techniques will bring meaning to the often chaotic deluge of information.
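A minimal sketch of the "link data from all these different data sources" idea, using a tiny in-memory property graph of the kind a graph database generalizes. The node and relation names are illustrative, not taken from any particular product.

```python
# A tiny property graph: nodes linked by named relations, so data from
# heterogeneous sources (sensors, buildings, organizations) can be traversed.
from collections import defaultdict

class TinyGraph:
    def __init__(self):
        self.edges = defaultdict(list)   # node -> [(relation, node), ...]

    def link(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, node, relation=None):
        """All nodes reachable in one hop, optionally filtered by relation."""
        return [dst for rel, dst in self.edges[node]
                if relation is None or rel == relation]

g = TinyGraph()
g.link("sensor:42", "located_in", "building:HQ")
g.link("sensor:42", "reports", "reading:temp")
g.link("building:HQ", "managed_by", "org:Facilities")

print(g.neighbors("sensor:42"))                 # both linked nodes
print(g.neighbors("sensor:42", "located_in"))   # ['building:HQ']
```

Semantic tools add typed schemas, inference, and query languages on top of this basic structure, but the linking idea is the same.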

Advanced Machine Learning

In advanced machine learning, deep neural nets (DNNs) move beyond classic computing and information management to create systems that can autonomously learn to perceive the world on their own, making it possible to address key challenges related to the information of everything trend.

DNNs (an advanced form of machine learning particularly applicable to large, complex datasets) are what make smart machines appear “intelligent.” DNNs enable hardware- or software-based machines to learn for themselves all the features in their environment, from the finest details to broad sweeping abstract classes of content. This area is evolving quickly, and organizations must assess how they can apply these technologies to gain competitive advantage.
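The core principle — adjust weights to reduce error on examples — can be shown at the smallest possible scale with a single perceptron learning the AND function. A deep neural net stacks many such units into layers and learns far richer features, but this toy illustrates the learning loop.

```python
# A single perceptron trained on AND: the simplest instance of
# "learn from examples by adjusting weights to reduce error."
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred            # 0 when correct; ±1 when wrong
            w[0] += lr * err * x1          # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])   # [0, 0, 0, 1]
```

A perceptron can only learn linearly separable functions; the "deep" in DNNs is precisely what lifts that limitation.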

Autonomous Agents and Things

Machine learning gives rise to a spectrum of smart machine implementations — including robots, autonomous vehicles, virtual personal assistants (VPAs) and smart advisors — that act in an autonomous (or at least semiautonomous) manner.

VPAs such as Google Now, Microsoft’s Cortana, and Apple’s Siri are becoming smarter and are precursors to autonomous agents. The emerging notion of assistance feeds into the ambient user experience in which an autonomous agent becomes the main user interface. Instead of interacting with menus, forms and buttons on a smartphone, the user speaks to an app, which is really an intelligent agent.

Adaptive Security Architecture

The complexities of digital business and the algorithmic economy combined with an emerging “hacker industry” significantly increase the threat surface for an organization. Relying on perimeter defense and rule-based security is inadequate, especially as organizations exploit more cloud-based services and open APIs for customers and partners to integrate with their systems. IT leaders must focus on detecting and responding to threats, as well as more traditional blocking and other measures to prevent attacks. Application self-protection, as well as user and entity behavior analytics, will help fulfill the adaptive security architecture.
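User and entity behavior analytics, at its simplest, means flagging events that deviate sharply from an entity's historical baseline. The sketch below uses a z-score threshold on a "login hour" feature; both the feature and the threshold are illustrative choices, not a production design.

```python
# Flag a new observation if it lies far outside a user's historical baseline.
from statistics import mean, stdev

def is_anomalous(history, new_value, z_threshold=3.0):
    """True if new_value is more than z_threshold std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold

login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]   # typical workday logins
print(is_anomalous(login_hours, 9))    # False: in line with baseline
print(is_anomalous(login_hours, 3))    # True: a 3 a.m. login stands out
```

Real behavior-analytics systems model many features jointly and adapt baselines over time, but the detect-and-respond premise is the same: assume the perimeter will be breached and look for deviations.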

Advanced System Architecture

The digital mesh and smart machines place intense demands on computing architecture to make them viable for organizations. Providing the required boost are high-powered, ultraefficient neuromorphic (brain-like) architectures fueled by GPUs (graphics processing units) and field-programmable gate arrays (FPGAs). There are significant gains to this architecture, such as being able to run at speeds greater than a teraflop with high energy efficiency.

Mesh App and Service Architecture

Monolithic, linear application designs (e.g., the three-tier architecture) are giving way to a more loosely coupled integrative approach: the apps and services architecture. Enabled by software-defined application services, this new approach enables Web-scale performance, flexibility and agility. Microservice architecture is an emerging pattern for building distributed applications that support agile delivery and scalable deployment, both on-premises and in the cloud. Containers are emerging as a critical technology for enabling agile development and microservice architectures. Bringing mobile and IoT elements into the app and service architecture creates a comprehensive model to address back-end cloud scalability and front-end device mesh experiences. Application teams must create new modern architectures to deliver agile, flexible and dynamic cloud-based applications that span the digital mesh.
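The loose coupling described above can be sketched with a tiny service registry: small independent services are discovered and invoked by name rather than compiled into one monolith. The service names and payloads below are hypothetical, and real mesh architectures would add networking, discovery protocols, and containerized deployment.

```python
# Services register under a name; callers compose them without knowing
# their implementation — the essence of loose coupling.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        self._services[name] = handler

    def call(self, name, payload):
        if name not in self._services:
            raise LookupError(f"no service registered under {name!r}")
        return self._services[name](payload)

registry = ServiceRegistry()
registry.register("inventory", lambda item: {"item": item, "in_stock": True})
registry.register("pricing", lambda item: {"item": item, "price": 9.99})

# A front end (phone app, IoT device) composes back-end services by name:
print(registry.call("inventory", "widget"))
print(registry.call("pricing", "widget"))
```

Because each service is addressed only by name and payload, either one can be replaced, scaled, or redeployed in a container without touching its callers.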

Internet of Things Platforms

IoT platforms complement the mesh app and service architecture. The management, security, integration and other technologies and standards of the IoT platform are the base set of capabilities for building, managing, and securing elements in the IoT. The IoT is an integral part of the digital mesh and ambient user experience, and the emerging and dynamic world of IoT platforms is what makes them possible.

* Gartner defines a strategic technology trend as one with the potential for significant impact on the organization. Factors that denote significant impact include a high potential for disruption to the business, end users or IT, the need for a major investment, or the risk of being late to adopt. These technologies impact the organization’s long-term plans, programs and initiatives.

Smaller silver nanoparticles more likely to be absorbed by aquatic life, UCLA study finds

Researchers studied zebrafish because they have some genetic similarities to humans and their embryos and larvae are transparent, which makes them easier to observe (credit: Tunde Akinloye/CNSI)

A study led by UCLA scientists has found that smaller silver nanoparticles entered fish’s bodies more deeply and persisted longer than larger silver nanoparticles or fluid silver nitrate.

More than 2,000 consumer products today contain nanoparticles — particles so small that they are measured in billionths of a meter. Manufacturers use nanoparticles to help sunscreen work better against the sun’s rays and to make athletic apparel better at wicking moisture away from the body, among many other purposes.

Of those products, 462 contain nanoparticles made from silver, which are used for their ability to kill bacteria. But that benefit might be coming at a cost to the environment. In many cases, simply using the products as intended causes silver nanoparticles to wind up in rivers and other bodies of water, where they can be ingested by fish and interact with other aquatic life.

The new study by the University of California Center for Environmental Implications of Nanotechnology, published online in the journal ACS Nano, was intended to begin addressing the question: to what extent do organisms retain those particles, and what effects might they have?

Absorption of silver nanoparticles by fish

According to Andre Nel, director of UCLA’s Center for Environmental Implications of Nanotechnology (CEIN) and associate director of the California NanoSystems Institute at UCLA, it is not yet known whether silver nanoparticles are harmful, but the research team wanted to first identify whether they were even being absorbed by fish.

Deposits of 20-nanometer silver nanoparticles in zebrafish gill filaments (outlined in red) (credit: Olivia J. Osborne et al./ACS Nano)

In the study, researchers placed zebrafish in water that contained fluid silver nitrate and two sizes of silver nanoparticles — some measuring 20 nanometers in diameter and others 110 nanometers. The researchers found that the two sizes of particles affected the fish very differently.

The researchers used zebrafish in the study because they have some genetic similarities to humans, and their embryos and larvae are transparent (which makes them easier to observe). In addition, they tend to absorb chemicals and other substances from water.

The team focused its research on the fish’s gills and intestines because they are the organs most susceptible to silver exposure.

The gills showed a significantly higher silver content for the 20-nanometer than the 110-nanometer particles, while the values were more similar in the intestines; both sizes of the silver particles were retained in the intestines even after the fish spent seven days in clean water.

The experiment was one of the most comprehensive in vivo studies to date on silver nanoparticles, as well as the first to compare silver nanoparticle toxicity by extent of organ penetration and duration with different-sized particles, and the first to demonstrate a mechanism for the differences.

Osborne said the results seem to indicate that smaller particles penetrated deeper into the fishes’ organs and stayed there longer because they dissolve faster than the larger particles and are more readily absorbed by the fish.
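The size argument has a simple geometric core: a sphere's surface-to-volume ratio scales as 6/d, so smaller particles expose proportionally more surface to the surrounding water. The back-of-the-envelope check below compares the two particle sizes used in the study; it illustrates only the geometry, not the full dissolution chemistry.

```python
# Surface-to-volume ratio of a sphere: why 20 nm particles dissolve faster
# than 110 nm particles of the same material.
import math

def surface_to_volume(diameter_nm: float) -> float:
    r = diameter_nm / 2
    area = 4 * math.pi * r**2
    volume = (4 / 3) * math.pi * r**3
    return area / volume        # simplifies to 6 / diameter

ratio_20 = surface_to_volume(20)
ratio_110 = surface_to_volume(110)
print(round(ratio_20 / ratio_110, 1))   # 5.5x more surface per unit volume
```

So a 20-nanometer particle presents 5.5 times more surface per unit volume than a 110-nanometer one, consistent with the faster-dissolution mechanism the study proposes.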

Nel said the team’s next step is to determine whether silver particles are potentially harmful. “Our research will continue in earnest to determine what the long-term effects of this exposure can be,” he said.

The research was supported by the National Science Foundation and the Environmental Protection Agency.


Abstract of Organ-Specific and Size-Dependent Ag Nanoparticle Toxicity in Gills and Intestines of Adult Zebrafish

We studied adult zebrafish to determine whether the size of 20 and 110 nm citrate-coated silver nanoparticles (AgC NPs) differentially impact the gills and intestines, known target organs for Ag toxicity in fish. Following exposure for 4 h, 4 days, or 4 days plus a 7 day depuration period, we obtained different toxicokinetic profiles for different particle sizes, as determined by Ag content of the tissues. Ionic AgNO3 served as a positive control. The gills showed a significantly higher Ag content for the 20 nm particles at 4 h and 4 days than the 110 nm particles, while the values were more similar in the intestines. Both particle types were retained in the intestines even after depuration. These toxicokinetics were accompanied by striking size-dependent differences in the ultrastructural features and histopathology in the target organs in response to the particulates. Ag staining of the gills and intestines confirmed prominent Ag deposition in the basolateral membranes for the 20 nm but not for the 110 nm particles. Furthermore, it was possible to link the site of tissue deposition to disruption of the Na+/K+ ion channel, which is also localized to the basolateral membrane. This was confirmed by a reduction in ATPase activity and immunohistochemical detection of the α subunit of this channel in both target organs, with the 20 nm particles causing significantly higher inhibition and disruption than the larger size particles or AgNO3. These results demonstrate the importance of particle size in determining the hazardous impact of AgNPs in the gills and intestines of adult zebrafish.

Detecting infectious and autoimmune antibodies with a DNA nanomachine

A nanoscale DNA “machine,” shown in this illustration bound to an antibody (yellow), rapidly lights up when it recognizes specific target antibodies (credit: Marco Tripodi)

An international team of scientists has developed a nanomachine using synthetic DNA for rapid, sensitive, low-cost diagnosis of infectious and autoimmune diseases, including HIV, at the point of care. It aims to replace the slow, cumbersome, and expensive current process of detecting the protein antibodies used for diagnosis.

An antibody causes a structural change (or switch) in the device, which generates a light signal. The sensor does not need to be chemically activated and is rapid — acting within five minutes — enabling the targeted antibodies to be easily detected, even in complex clinical samples such as blood serum.

The antibody-targeting sensor is composed of a light-emitting fluorophore (F) and quencher (green circle) connected to two single-stranded DNA tails joined to the appropriate recognition element (red hexagons) for a given test. When a target antibody is detected by the two recognition elements, they open the stem, activating the fluorophore. (credit: S. Ranallo et al./Angew. Chem. Int. Ed.)

The research is described in the October issue of the journal Angewandte Chemie.

“One of the advantages of our approach is that it is highly versatile,” said Prof. Francesco Ricci, of the University of Rome, Tor Vergata, senior co-author of the study. “This DNA nanomachine can in fact be custom-modified so that it can detect a huge range of antibodies; this makes our platform adaptable for many different diseases.”

“Our modular platform provides significant advantages over existing methods for the detection of antibodies,” added Prof. Vallée-Bélisle of the University of Montreal, the other senior co-author of the paper. “It is rapid, does not require reagent chemicals, and may prove to be useful in a range of different applications such as point-of-care diagnostics and bioimaging.”

The researchers plan to allow the light-emitting signal to be detected by a mobile phone.

A University of California, Santa Barbara scientist was also involved in the research.


Abstract of A Modular, DNA-Based Beacon for Single-Step Fluorescence Detection of Antibodies and Other Proteins

A versatile platform for the one-step fluorescence detection of both monovalent and multivalent proteins has been developed. This system is based on a conformation-switching stem–loop DNA scaffold that presents a small-molecule, polypeptide, or nucleic-acid recognition element on each of its two stem strands. The steric strain associated with the binding of one (multivalent) or two (monovalent) target molecules to these elements opens the stem, enhancing the emission of an attached fluorophore/quencher pair. The sensors respond rapidly (<10 min) and selectively, enabling the facile detection of specific proteins even in complex samples, such as blood serum. The versatility of the platform was demonstrated by detecting five bivalent proteins (four antibodies and the chemokine platelet-derived growth factor) and two monovalent proteins (a Fab fragment and the transcription factor TBP) with low nanomolar detection limits and no detectable cross-reactivity.

Sleep may strengthen long-term memories in the immune system

A model of memory formation in the central nervous system (upper section) and the immune system (lower section) (credit: Westermann et al./Trends in Neurosciences 2015)

Deep (slow-wave*) sleep, which helps retain memories in the brain, may also strengthen immunological memories of encountered pathogens, German and Dutch neuroscientists propose in an Opinion article published September 29 in Trends in Neurosciences.

The immune system “remembers” an encounter with a bacterium or virus by collecting fragments from the microbe to create memory T cells, which last for months or years and help the body recognize a previous infection and quickly respond. These memory T cells appear to abstract “gist information” about the pathogens, allowing memory T cells to detect new pathogens that are similar, but not identical, to previously encountered bacteria or viruses.

Studies in humans have shown that long-term increases in memory T cells are associated with deep slow-wave sleep on the nights after vaccination. Taken together, the findings support the view that slow-wave sleep contributes to the formation of long-term memories of abstract, generalized information, which leads to adaptive behavioral and immunological responses.

How lack of sleep puts your body at risk

The obvious implication is that sleep deprivation could put your body at risk. “If we didn’t sleep, then the immune system might focus on the wrong parts of the pathogen,” says senior author Jan Born of the University of Tuebingen.

“For example, many viruses can easily mutate some parts of their proteins to escape from immune responses. If too few antigen-recognizing cells [the cells that present the fragments to T cells] are available, then they might all be needed to fight off the pathogen. In addition to this, there is evidence that the hormones released during sleep benefit the crosstalk between antigen-presenting and antigen-recognizing cells, and some of these important hormones could be lacking without sleep.”

Born says that future research should examine what information is selected during sleep for storage in long-term memory, and how this selection is achieved. This research could have important clinical implications.

“In order to design effective vaccines against HIV, malaria, and tuberculosis, which are based on immunological memory, the correct memory model must be available,” Born says. “It is our hope that by comparing the concepts of neuronal and immunological memory, a model of immunological memory can be developed which integrates the available experimental data and serves as a helpful basis for vaccine development.”

* Slow wave sleep (SWS) is the constructive phase of sleep for recuperation of the mind-body system in which it rebuilds itself after each day. Substances that have been ingested into the body while an organism is awake are synthesized into complex proteins of living tissue. Growth hormones are also secreted to facilitate the healing of muscles as well as repairing damage to any tissues. Lastly, glial cells within the brain are restored with sugars to provide energy for the brain. Longer periods of SWS occur in the first part of the night, primarily in the first two sleep cycles (roughly three hours). — Wikipedia


Abstract of System Consolidation during Sleep — A Common Principle Underlying Psychological and Immunological Memory Formation

Sleep benefits the consolidation of psychological memory, and there are hints that sleep likewise supports immunological memory formation. Comparing psychological and immunological domains, we make the case for active system consolidation that is similarly established in both domains and partly conveyed by the same sleep-associated processes. In the psychological domain, neuronal reactivation of declarative memory during slow-wave sleep (SWS) promotes the redistribution of representations initially stored in hippocampal circuitry to extra-hippocampal circuitry for long-term storage. In the immunological domain, SWS seems to favor the redistribution of antigenic memories initially held by antigen-presenting cells, to persisting T cells serving as a long-term store. Because storage capacities are limited in both systems, system consolidation presumably reduces information by abstracting ‘gist’ for long-term storage.

DARPA selects research teams for its ElectRx neuron-sensing/stimulation program

DARPA announced Monday (Oct. 5, 2015) that it has selected seven teams of researchers to begin work on a radical new approach to healing called Electrical Prescriptions (ElectRx). It would involve a system that stimulates peripheral nerves to modulate functions in the brain, spinal cord, and internal organs, according to program manager Doug Weber.

DARPA envisions a closed-loop system aimed at monitoring and treating conditions such as chronic pain, inflammatory disease, post-traumatic stress, and other illnesses that may not be responsive to traditional treatments, using optical, acoustic, electromagnetic, or engineered biology strategies to achieve precise targeting, possibly at single-axon resolution.

Pacemakers for other organs

The oldest and simplest example of this concept is the cardiac pacemaker, which uses brief pulses of electricity to stimulate the heart to beat at a healthy rate. DARPA aims to extend this concept to other organs, like the spleen, and treat inflammatory diseases such as rheumatoid arthritis.

Fighting inflammation may also provide new treatments for depression, which growing evidence suggests might be caused in part by excess levels of inflammatory biomolecules. Peripheral nerve stimulation may also be used to regulate production of neurochemicals that regulate learning and memory in the brain, offering new treatments for post-traumatic stress and other mental health disorders.

In phase 1, the ElectRx program will focus on fundamental studies to map the neural circuits governing the physiology of diseases of interest to DARPA, and also on preliminary development of novel, minimally invasive neural and bio-interface technologies with unprecedented levels of precision, targeting, and scale.

The teams

The seven teams include a mix of first-time and prior DARPA performers.

For example, an MIT team led by Polina Anikeeva will aim to advance its research in stimulating brain tissue using external magnetic fields and injected magnetic nanoparticles to treat neurological diseases such as Parkinson’s disease, replacing surgically implanted electrodes, as KurzweilAI reported in March. When exposed to a low-frequency (100 kHz to 1 MHz) external alternating magnetic field — which can penetrate deep inside biological tissues — these nanoparticles rapidly heat up and trigger heat-sensitive capsaicin (the “hot” in peppers) receptors to stimulate neurons.


MIT | Wireless brain stimulation

The other teams are:

  • Circuit Therapeutics (Menlo Park, Calif.), a start-up co-founded by Stanford University scientists Karl Deisseroth and Scott Delp, plans to further develop its experimental optogenetic methods for treating neuropathic pain, building toward testing in animal models first.
  • A team at Columbia University (New York), led by Elisa Konofagou, will pursue fundamental science to support the use of non-invasive, targeted ultrasound for neuromodulation. The team aims to elucidate the underlying mechanisms that may make ultrasound an option for chronic intervention, including activation and inhibition of nerves.
  • A team at the Florey Institute of Neuroscience and Mental Health (Parkville, Australia), led by John Furness, will seek to map the nerve pathways that underlie intestinal inflammation, with a focus on determining the correlations between animal models and human neural circuitry. They will also explore the use of neurostimulation technologies based on the cochlear implant — developed by Cochlear, Inc. to treat hearing loss but adapted to modulate activity of the vagus nerve in response to biofeedback signals — as a possible treatment for inflammatory bowel disease.
  • A team at the Johns Hopkins University (Baltimore), led by Jiande Chen, aims to explore the root mechanisms of inflammatory bowel disease and the impact of sacral nerve stimulation on its progression. The team will apply a first-of-its-kind approach to visualize intestinal responses to neuromodulation in animal models.
  • A team at Purdue University (West Lafayette, Ind.), led by Pedro Irazoqui, will leverage an existing collaboration with Cyberonics to study inflammation of the gastrointestinal tract and its responsiveness to vagal nerve stimulation through the neck. Validation of the mechanistic insights that emerge from the effort will take place in pre-clinical models in which novel neuromodulation devices will be applied to reduce inflammation in a feedback-controlled manner. Later stages of the effort could advance the design of clinical neuromodulation devices.
  • A team at the University of Texas, Dallas, led by Robert Rennaker and Michael Kilgard, will examine the use of vagal nerve stimulation to induce neural plasticity for the treatment of post-traumatic stress. As envisioned, stimulation could enhance learned behavioral responses that reduce fear and anxiety when presented with traumatic cues. Dr. Rennaker is a U.S. Marine Corps veteran who served in Liberia, Kuwait and Yugoslavia.

A fast cell sorter shrinks to cell phone size

An artist’s conception of an acoustic cell sorter is the cover image on the current issue of Lab on a Chip (credit: Huang Group/Penn State)

Penn State researchers have developed a new lab-on-a-chip cell-sorting device based on acoustic waves that is capable of the high sorting throughput necessary to compete with commercial fluorescence activated cell sorters. The work is described in the cover story of the current issue of the British journal Lab on a Chip.

Commercial fluorescence activated cell sorters have been highly successful in the past 40 years at rapidly and accurately aiding medical diagnosis and biological studies, but they are bulky and too expensive ($200,000 to $1,000,000) for many labs or doctors’ offices.

“The current benchtop cell sorters are too expensive, too unsafe, and too high-maintenance. More importantly, they have very low biocompatibility. The cell-sorting process can reduce cell viability and functions by 30–99 percent for many fragile or sensitive cells such as neurons, stem cells, liver cells and sperm cells,” said Tony Jun Huang, Penn State professor of engineering science and mechanics and the paper’s corresponding author. “We are developing an acoustic cell sorter that has the potential to address all these problems.”

High-speed sorting

Schematic of the standing surface acoustic waves (SSAWs)-based sorter excited by focused interdigital transducers (FIDTs) (credit: Liqiang Ren et al./Lab on a Chip)

Microfluidic cell sorters are a promising new tool for single-cell sequencing, rare-cell isolation, and drug screening. However, many of them operate at only a few hundred cells per second, far too slow to compete with commercial devices that operate on the order of tens of thousands of cells per second. The Penn State system can sort about 3,000 cells per second, with the potential to sort more than 13,000 cells per second.
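The headline throughput figure follows directly from the minimum actuation time reported in the abstract: with 72 microseconds per sorting event, the upper bound on events per second is simply the reciprocal.

```python
# Sanity-check of the throughput figures: the reciprocal of the 72 us
# minimum actuation time bounds the events-per-second rate.
actuation_time_s = 72e-6
max_events_per_second = 1 / actuation_time_s
print(int(max_events_per_second))   # 13888, matching the ">13,800 events per second" in the abstract
```

The demonstrated 3,300 events per second sits well below this ceiling because the actuation time is only one stage of the sorting pipeline.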

The speed is generated by using focused transducers to create standing surface acoustic waves (SSAWs). When the waves are not focused, the acoustic field spreads out, slowing the sorting process. The narrow field allows the sorting to take place at high speed while gently manipulating individual cells.

“Our high-throughput acoustic cell sorter is expected to maintain cell integrity by preserving not only high viability, but also other cellular features such as gene expression, post translational modification, and cell function,” said Huang.

“The acoustic power intensity and frequency used in our device are in a similar range as those used in ultrasonic imaging, which has proven to be extremely safe for health monitoring, even during various stages of pregnancy. With the gentle nature of low-power acoustic waves, I believe that our device has the best chance in preserving cell integrity, even for fragile, sensitive cells. Such an ability is important for numerous applications such as animal reproduction, cell immunotherapy and biological research.”

Because the device is built on a lab-on-a-chip system, it is both compact and inexpensive — about the size and cost of a cell phone in its current configuration. With the addition of optics, the device would still be only as large as a book.

The acoustic cell sorter was fabricated in Penn State’s Nanofabrication Laboratory using standard lithography techniques and co-developed with Ascent Bio-Nano Technologies and the National Heart, Lung, and Blood Institute, a part of the National Institutes of Health.

In future work, the researchers plan to integrate their acoustic cell-sorting unit with an optical cell-detecting unit, with the goal of increasing throughput to 10,000 events per second.


Abstract of A high-throughput acoustic cell sorter

Acoustic-based fluorescence activated cell sorters (FACS) have drawn increased attention in recent years due to their versatility, high biocompatibility, high controllability, and simple design. However, the sorting throughput for existing acoustic cell sorters is far from optimum for practical applications. Here we report a high-throughput cell sorting method based on standing surface acoustic waves (SSAWs). We utilized a pair of focused interdigital transducers (FIDTs) to generate SSAW with high resolution and high energy efficiency. As a result, the sorting throughput is improved significantly from conventional acoustic-based cell sorting methods. We demonstrated the successful sorting of 10 μm polystyrene particles with a minimum actuation time of 72 μs, which translates to a potential sorting rate of more than 13800 events per second. Without using a cell-detection unit, we were able to demonstrate an actual sorting throughput of 3300 events per second. Our sorting method can be conveniently integrated with upstream detection units, and it represents an important development towards a functional acoustic-based FACS system.

How the brain’s wiring leads to cognitive control

From weighted brain networks (a), researchers estimate control points (b) whose large-scale regional activity can move the brain into new trajectories that traverse diverse cognitive functions (credit: Shi Gu et al./Nature Communications)

How does the brain determine which direction its thoughts travel? Looking for the mechanisms behind cognitive control of thought, researchers at the University of Pennsylvania, the University of California campuses at Riverside and Santa Barbara, and the United States Army Research Laboratory have used brain scans to shed new light on this question.

By using structural imaging techniques to convert brain scans into “wiring diagrams” of connections between brain regions, the researchers used the structure of these neural networks to reveal the fundamental rules that govern which parts of the brain are most able to exert “cognitive control” over thoughts and actions.

The work, published in an open-access paper in Nature Communications, weds cutting-edge neuroscience with the emerging field of network science, which is often used to study social systems. It applies control theory, a field traditionally used to study electrical and mechanical systems, to show that being on the “outskirts” of the brain is necessary for the frontal cortex to dynamically control the direction of thoughts and goal-directed behavior.

This fundamental understanding of how the brain controls its activity could help lead to better interventions for medical conditions associated with reduced cognitive control, such as autism, schizophrenia or dementia.

How the front cortex controls thoughts

According to Danielle Bassett, the Skirkanich Assistant Professor of Innovation in Penn’s School of Engineering and Applied Science and senior author on the study, “our results suggest that the human brain resembles a flock of birds. The flock comes to a consensus about which way to fly based on how close the birds are to one another and in what formation. Birds that fly at specific places in the flock can drive changes in the flock’s direction, being leaders in a so-called multi-agent system.

“Similarly, particular regions of your brain are predisposed to control your thoughts based on where they lie in relation to other regions.”

Cognitive psychologists and neuroscientists have long known that the frontal cortex is heavily involved in cognitive control. It is most active in experimental subjects asked to do tasks that require executive function, and damage to that region of the brain, through disease or injury, often results in loss of that function.

Applying control theory

The researchers were interested in developing a more fundamental understanding of how that region of the brain interacts with others to allow for executive function. Starting with detailed brain scans that show how neurons are physically connected to one another with one-millimeter precision, the scientists used a mathematical technique drawn from control theory in engineering.

By applying control theory equations to the “wiring diagrams” generated from brain scans, the researchers showed that the geographical and functional differences between regions of the brain are linked, following principles akin to those governing large-scale dynamical networks such as power grids and robotic systems.

While the analysis cannot say whether the frontal cortex’s location or its role evolved first, it suggests that part of the frontal cortex’s ability to control executive function depends on its distance from other parts of the brain network.

“Regions that are most interconnected, and therefore more internal to the network, are very good at moving the brain into nearby states, like from writing someone an email to talking to them on the phone. What’s particularly interesting is, if we look at where those inner nodes are, they’re all in ‘default mode’ regions, which are the regions that are active when you are resting. This makes sense, because if you were engineering an optimal system, you would want to put its baseline somewhere where it can get to most of the places it has to go pretty easily.”
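The idea of a region being “good at moving the brain into nearby states” can be made concrete with linear network control theory, the framework the study applies. Below is a minimal numpy sketch (toy adjacency matrix, not real connectome data): model regional activity as a discrete-time linear system x(t+1) = A·x(t) + B·u(t), and score each region by the trace of the controllability Gramian when that region alone receives input — a standard “average controllability” measure in this literature.

```python
import numpy as np

def average_controllability(A, horizon=50):
    """Score each node by the trace of its finite-horizon controllability
    Gramian, treating the node as the sole control input (B = e_i).
    A is a structural connectivity (adjacency) matrix."""
    # Scale A so its largest singular value is < 1, keeping dynamics stable
    A = A / (1 + np.linalg.svd(A, compute_uv=False)[0])
    n = A.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        B = np.zeros((n, 1))
        B[i] = 1.0
        W = np.zeros((n, n))     # Gramian accumulator
        Ak = np.eye(n)           # A^k, starting at k = 0
        for _ in range(horizon):
            M = Ak @ B
            W += M @ M.T
            Ak = Ak @ A
        scores[i] = np.trace(W)
    return scores

# Toy 4-node network in which node 0 is a densely connected "hub"
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(average_controllability(A))
```

In this toy network the hub node scores highest, mirroring the paper’s finding that densely connected (default-mode-like) areas most easily steer the system into nearby, low-effort states.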

This type of holistic understanding of the relationship between brain regions’ location and their roles is necessary for tailoring better treatments for people who have lost executive function due to disease or injury.

“We’re very interested in controlling brain networks with techniques like optogenetics, transcranial magnetic or direct-current stimulation, deep brain stimulation or even neurofeedback,” Bassett said, “but the problem has been that there is little theoretical basis to determine how these stimulations affect the dynamics of the whole brain. In most cases, stimulation is applied via trial and error. This research helps to build up an understanding of the impact of stimulation in one region on cognition as a whole.”

Future research will test whether “wiring” differences between people predict their performance on cognitive tasks. It will also underpin work on therapeutic and adaptive technologies that capitalize on brain networks’ unique advantages over their computerized counterparts.


Abstract of Controllability of structural brain networks

Cognitive function is driven by dynamic interactions between large-scale neural circuits or networks, enabling behaviour. However, fundamental principles constraining these dynamic network processes have remained elusive. Here we use tools from control and network theories to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure. Our results suggest that densely connected areas, particularly in the default mode system, facilitate the movement of the brain to many easily reachable states. Weakly connected areas, particularly in cognitive control systems, facilitate the movement of the brain to difficult-to-reach states. Areas located on the boundary between network communities, particularly in attentional control systems, facilitate the integration or segregation of diverse cognitive systems. Our results suggest that structural network differences between cognitive circuits dictate their distinct roles in controlling trajectories of brain network function.

First two-qubit logic gate built in silicon

Artist’s impression of the two-qubit logic gate device developed at UNSW. Each of the two electron qubits (red and blue) has a spin, or magnetic field, indicated by the arrow directions. Metal electrodes on the surface are used to manipulate the qubits, which interact to create an entangled quantum state. (credit: Tony Melov/UNSW)

University of New South Wales (UNSW) and Keio University engineers have built the first quantum logic gate in silicon, making calculations between two qubits* of information possible and clearing the final hurdle to making silicon quantum computers a reality.

The significant advance appears today (Oct. 5, 2015) in the journal Nature.

“What we have is a game changer,” said team leader Andrew Dzurak, Scientia Professor and Director of the Australian National Fabrication Facility at UNSW. “Because we use essentially the same device technology as existing computer chips, we believe it will be much easier to manufacture a full-scale processor chip than for any of the leading designs, which rely on more exotic technologies.”


University of New South Wales

“If quantum computers are to become a reality, the ability to conduct one- and two-qubit calculations is essential,” said Dzurak, who jointly led the team in 2012 that demonstrated the first ever silicon qubit, also reported in Nature.

Until now, using silicon, it had not been possible to make two quantum bits “talk” to each other and thereby create a logic gate. The new result means that all of the physical building blocks for a silicon-based quantum computer have now been successfully constructed, allowing engineers to finally begin the task of designing and building a functioning quantum computer, the researchers say.

Dzurak noted that the team had recently “patented a design for a full-scale quantum computer chip that would allow for millions of our qubits … using standard industrial manufacturing techniques to build the world’s first quantum processor chip. … That has major implications for the finance, security, and healthcare sectors.”

He said that a key next step for the project is to identify the right industry partners to work with to manufacture the full-scale quantum processor chip.

Dzurak’s research is supported by the Australian Research Council via the Centre of Excellence for Quantum Computation and Communication Technology, the U.S. Army Research Office, the State Government of New South Wales in Australia, the Commonwealth Bank of Australia, and the University of New South Wales. Veldhorst acknowledges support from the Netherlands Organisation for Scientific Research. The quantum logic devices were constructed at the Australian National Fabrication Facility, which is supported by the federal government’s National Collaborative Research Infrastructure Strategy (NCRIS).

* In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. A quantum bit (or ‘qubit’) can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on).
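The footnote’s arithmetic is easy to see in a few lines of linear algebra (an illustrative numpy sketch of the abstract math, not of the UNSW device physics): a two-qubit register is a vector of 4 complex amplitudes, and a CNOT gate — the kind of two-qubit logic gate reported here — is a 4×4 matrix that flips the target qubit only when the control qubit is 1.

```python
import numpy as np

def ket(bits):
    """Basis state |bits> of a two-qubit register as a 4-amplitude vector."""
    v = np.zeros(4, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

# CNOT: flips the second (target) qubit when the first (control) qubit is 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

print(CNOT @ ket("10"))  # |10> -> |11>: the target flips

# A superposed control qubit yields an entangled (Bell) state:
plus = (ket("00") + ket("10")) / np.sqrt(2)
print(CNOT @ plus)       # (|00> + |11>)/sqrt(2)
```

The second example shows the “parallelism” the footnote describes: because the control qubit is in a superposition, one gate application acts on both branches at once, leaving the two qubits entangled.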


Abstract of A two-qubit logic gate in silicon

Quantum computation requires qubits that can be coupled in a scalable manner, together with universal and high-fidelity one- and two-qubit logic gates. Many physical realizations of qubits exist, including single photons, trapped ions, superconducting circuits, single defects or atoms in diamond and silicon, and semiconductor quantum dots, with single-qubit fidelities that exceed the stringent thresholds required for fault-tolerant quantum computing. Despite this, high-fidelity two-qubit gates in the solid state that can be manufactured using standard lithographic techniques have so far been limited to superconducting qubits, owing to the difficulties of coupling qubits and dephasing in semiconductor systems. Here we present a two-qubit logic gate, which uses single spins in isotopically enriched silicon and is realized by performing single- and two-qubit operations in a quantum dot system using the exchange interaction, as envisaged in the Loss–DiVincenzo proposal. We realize CNOT gates via controlled-phase operations combined with single-qubit operations. Direct gate-voltage control provides single-qubit addressability, together with a switchable exchange interaction that is used in the two-qubit controlled-phase gate. By independently reading out both qubits, we measure clear anticorrelations in the two-spin probabilities of the CNOT gate.

Fusion reactors ‘economically viable’ in a few decades, say experts

An illustration of a tokamak with plasma (credit: ITER Organization)

Fusion reactors could become an economically viable means of generating electricity within a few decades, replacing conventional nuclear power stations, according to new research at Durham University and Culham Centre for Fusion Energy in Oxfordshire, U.K.

The research, published in the journal Fusion Engineering and Design, builds on earlier findings that a fusion power plant could generate electricity at a price similar to that of a fission plant and identifies new advantages in using new superconductor technology.

Such findings support the possibility that, within a generation or two, fusion reactors could offer an almost unlimited supply of energy without contributing to global warming or producing hazardous products on a significant scale.

No radioactive waste or leaks

Fusion reactors generate electricity by heating plasma to around 100 million degrees centigrade so that hydrogen atoms fuse together, releasing energy. Fission reactors work by splitting atoms at much lower temperatures.

The advantage of fusion reactors is that they create almost no radioactive waste and no high-level radioactive material that could leak into the environment. That means disasters like Chernobyl or Fukushima are impossible, because the plasma simply fizzles out if it escapes.

Fusion energy would also not produce the weapons-grade byproducts that drive nuclear proliferation. It is fueled by deuterium (the hydrogen isotope found in “heavy water”), which is extracted from seawater, and tritium, which is created within the reactor, so there is no problem with security of supply either.

A test fusion reactor based on a tokamak design, the International Thermonuclear Experimental Reactor (ITER), is about 10 years away from operation in the south of France. Its aim is to prove the scientific and technological feasibility of fusion energy. MIT also plans to create a new lower-cost, compact version of a tokamak fusion reactor, likewise based on improved superconductors, which are needed to carry the high currents that generate the confining magnetic fields.

“Fission, fusion, or fossil fuels are the only practical options for reliable large-scale base-load energy sources,” said Professor Damian Hampshire, of the Centre for Material Physics at Durham University, who led the study. “Calculating the cost of a fusion reactor is complex, given the variations in the cost of raw materials and exchange rates. However, this work is a big step in the right direction,” he said.


Abstract of Optimal design of a toroidal field magnet system and cost of electricity implications for a tokamak using high temperature superconductors

The potential for reducing the Cost of Electricity (CoE) by using High Temperature Superconductors (HTS) in the Toroidal Field (TF) coils of a fusion tokamak power plant has been investigated using a new HTS module in the PROCESS systems code. We report the CoE and the design of HTS tokamaks that have been optimised by minimising the major radius of the plasma. Potential future improvements in both the superconducting properties and the structural materials for TF coils operating at 4.8 K and 30 K are considered. Increasing the critical current density by a factor of 10 (with a commensurate reduction in costs kA−1 m−1) results in a CoE 4.4% less than equivalent tokamaks using current low temperature superconductors (LTS). If the yield strength of the TF casing material is increased by 40% to 1400 MPa, the CoE is further reduced by 3.4%. Implementing both improvements and operating the TF coils at 4.8 K leads to CoE of 19.1 (10.1) €cent kW−1 h−1 for a 500 MW (1.5 GW) HTS reactor compared to 20.7 (11.1) €cent kW−1 h−1 for an LTS reactor (2013 costs). Operating the HTS TF coils at 30 K with both improvements, gives a similar CoE for HTS and LTS tokamaks.
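The two improvements quoted in the abstract compound rather than add. A quick arithmetic check on the 500 MW figures shows that a 4.4% reduction followed by a further 3.4% reduction takes the LTS baseline of 20.7 €cent/kWh to the reported ~19.1 €cent/kWh:

```python
lts_coe = 20.7                        # €cent per kWh, 500 MW LTS reactor (2013 costs)
after_jc = lts_coe * (1 - 0.044)      # 10x critical current density improvement
after_yield = after_jc * (1 - 0.034)  # +40% TF casing yield strength
print(round(after_yield, 1))          # ≈ 19.1, matching the quoted HTS figure

# Combined reduction is 1 - 0.956 * 0.966 ≈ 7.7%, slightly less than 4.4 + 3.4 = 7.8%
```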

How to grow a functional 3-D mini-brain for 25 cents

A bioengineering team at Brown University has grown 3-D “mini-brains” of neurons and supporting cells that form networks and are electrically active. This reconstruction of confocal images of a 21-day-in-vitro 3-D cortical neural spheroid shows β-III-tubulin+ neurons in red, GFAP+ astrocytes in green, and DAPI-stained nuclei in blue. (credit: Hoffman-Kim lab/Brown University)

Brown University scientists have developed a “mini-brain” — an accessible method for making a working sphere of central nervous system tissue and providing an inexpensive, easy-to-make 3-D testbed for biomedical research such as drug testing, testing neural tissue transplants, or experimenting with how stem cells work. (No, they don’t think. Yet.)

Mini-brains (cortical neural spheroids) produce electrical signals and form their own synapses. “We think of this as a way to have a better in vitro [lab] model that can maybe reduce animal use,” said graduate student Molly Boutin, co-lead author of a paper on the research in the journal Tissue Engineering: Part C. “A lot of the work that’s done right now is in two-dimensional culture, but this is an alternative that is much more relevant to the in vivo [living] scenario.”

The mini-brains, about a third of a millimeter in diameter, are not the first or the most sophisticated working cell cultures of a central nervous system, the researchers acknowledge, but they require fewer steps to make and they use more readily available materials. Here’s the simple recipe:

  1. First, catch a rodent.
  2. Take a small sample of living tissue, which can make thousands of mini-brains from one rodent brain.
  3. Isolate and concentrate the desired cells with some centrifuge steps.
  4. Use that refined sample to seed the cell culture in medium in an agarose spherical mold.

The spheres of brain tissue in the study begin to form within a day after the cultures are seeded and have formed complex 3-D neural networks within two to three weeks.

Key properties

The researchers were interested in studying aspects of neural cell transplantation, which has been proposed to treat Parkinson’s disease, and in how adult neural stem cells develop. The method they developed yields mini-brains with several important properties:

  • Diverse cell types: The cultures contain both inhibitory and excitatory neurons and several varieties of essential neural support cells called glia.
  • Electrically active: The neurons fire and spike and form synaptic connections, producing complex networks.
  • 3-D: Cells connect and communicate within a realistic geometry, rather than merely across a flat plane as in a 2-D culture.
  • Natural density: Experiments showed that the mini-brains have a density of a few hundred thousand cells per cubic millimeter, which is similar to a natural rodent brain.
  • Physical structure: Cells in the mini-brain produce their own extracellular matrix, producing a tissue with the same mechanical properties (squishiness) as natural tissue. The cultures also don’t rely on foreign materials such as scaffolds of collagen.
  • Longevity: In testing, cultured tissues live for at least a month.
  • Cost: about $0.25.

Study senior author Diane Hoffman-Kim, associate professor of molecular pharmacology, physiology and biotechnology (also associate professor of engineering at Brown and affiliated with the Brown Institute for Brain Science and the Center for Biomedical Engineering), hopes the mini-brains will spread to many different labs, including those of researchers who have questions about neural tissue but lack the specialized neuroscience and cell-culture equipment that other methods require.

The National Science Foundation, the National Institutes of Health, the Brown Institute for Brain Science, and the U.S. Department of Education funded the research.


Abstract of 3D Neural Spheroid Culture: An In Vitro Model for Cortical Studies

There is a high demand for in vitro models of the central nervous system to study neurological disorders, injuries, toxicity, and drug-efficacy. Three-dimensional (3D) in vitro models can bridge the gap between traditional 2D culture and animal models because they present an in vivo-like microenvironment in a tailorable experimental platform. Within the expanding variety of sophisticated 3D cultures, scaffold-free, self-assembled spheroid culture avoids the introduction of foreign materials and preserves the native cell populations and extracellular matrix types. In this study, we generated 3D spheroids with primary postnatal rat cortical cells using an accessible, size-controlled, reproducible, and cost-effective method. Neurons and glia formed laminin-containing 3D networks within the spheroids. The neurons were electrically active and formed circuitry via both excitatory and inhibitory synapses. The mechanical properties of the spheroids were in the range of brain tissue. These in vivo-like features of 3D cortical spheroids provide the potential for relevant and translatable investigations of the central nervous system in vitro.