How to animate a digital model of a person from images collected from the Internet

UW researchers have reconstructed 3-D models of celebrities such as Tom Hanks from large Internet photo collections. The models can also be controlled and animated by photos or videos of another person. (credit: University of Washington)

University of Washington researchers have demonstrated that it’s possible for machine learning algorithms to capture the “persona” and create a digital model of a well-photographed person like Tom Hanks from the vast number of images of them available on the Internet. With enough visual data to mine, the algorithms can also animate the digital model of Tom Hanks to deliver speeches that the real actor never performed.

Tom Hanks has appeared in many acting roles over the years, playing young and old, smart and simple. Yet we always recognize him as Tom Hanks. Why? Is it his appearance? His mannerisms? The way he moves? “One answer to what makes Tom Hanks look like Tom Hanks can be demonstrated with a computer system that imitates what Tom Hanks will do,” said lead author Supasorn Suwajanakorn, a UW graduate student in computer science and engineering.

The technology relies on advances in 3-D face reconstruction, tracking, alignment, multi-texture modeling, and puppeteering that have been developed over the last five years by a research group led by UW assistant professor of computer science and engineering Ira Kemelmacher-Shlizerman. The new results will be presented in an open-access paper at the International Conference on Computer Vision in Chile on Dec. 16.


Supasorn Suwajanakorn | What Makes Tom Hanks Look Like Tom Hanks

The team’s latest advances include the ability to transfer expressions and the way a particular person speaks onto the face of someone else — for instance, mapping former president George W. Bush’s mannerisms onto the faces of other politicians and celebrities.

It’s one step toward a grand goal shared by the UW computer vision researchers: creating fully interactive, three-dimensional digital personas from family photo albums and videos, historic collections or other existing visuals.

As virtual and augmented reality technologies develop, they envision using family photographs and videos to create an interactive model of a relative living overseas or a far-away grandparent, rather than simply Skyping in two dimensions.

“You might one day be able to put on a pair of augmented reality glasses and there is a 3-D model of your mother on the couch,” said senior author Kemelmacher-Shlizerman. “Such technology doesn’t exist yet — the display technology is moving forward really fast — but how do you actually re-create your mother in three dimensions?”

One day the reconstruction technology could be taken a step further, researchers say.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person — LeBron James, Barack Obama, Charlie Chaplin — and interact with them,” said co-author Steve Seitz, UW professor of computer science and engineering. “We’re trying to get there through a series of research steps. One of the true tests is can you have them say things that they didn’t say but it still feels like them? This paper is demonstrating that ability.”


Supasorn Suwajanakorn | George Bush driving crowd

Existing technologies to create detailed three-dimensional holograms or digital movie characters like Benjamin Button often rely on bringing a person into an elaborate studio. They painstakingly capture every angle of the person and the way they move — something that can’t be done in a living room.

Other approaches still require a person to be scanned by a camera to create basic avatars for video games or other virtual environments. But the UW computer vision experts wanted to digitally reconstruct a person based solely on a random collection of existing images.

Learning in the wild

To reconstruct celebrities like Tom Hanks, Barack Obama and Daniel Craig, the machine learning algorithms mined a minimum of 200 Internet images taken over time in various scenarios and poses — a process known as learning “in the wild.”

“We asked, ‘Can you take Internet photos or your personal photo collection and animate a model without having that person interact with a camera?’” said Kemelmacher-Shlizerman. “Over the years we created algorithms that work with this kind of unconstrained data, which is a big deal.”

Suwajanakorn more recently developed techniques to capture expression-dependent textures — small differences that occur when a person smiles or looks puzzled or moves his or her mouth, for example.

By manipulating the lighting conditions across different photographs, he developed a new approach to densely map the differences from one person’s features and expressions onto another person’s face. That breakthrough enables the team to “control” the digital model with a video of another person, and could potentially enable a host of new animation and virtual reality applications.
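For intuition, here is a minimal Python sketch of the kind of expression transfer such a dense mapping enables, reduced to 2-D facial landmarks. The function names and the simple scaled-displacement warp are illustrative assumptions; the actual system works with 3-D face models and multi-texture blending.

```python
# Hypothetical sketch of expression transfer via landmark displacements,
# loosely following the idea of densely mapping one person's expressions
# onto another's face. The 2-D scaled-displacement warp is an assumption;
# the real system uses 3-D reconstruction and multi-texture modeling.
import numpy as np

def transfer_expression(driver_neutral, driver_frame, target_neutral):
    """Map a driver's expression change onto a target's neutral landmarks.

    Each argument is an (N, 2) array of facial landmark coordinates,
    assumed to be in correspondence (same landmark order for both people).
    """
    # Expression = deviation of the driver's current frame from neutral.
    displacement = driver_frame - driver_neutral

    # Normalize for scale differences between the two faces
    # (bounding-box diagonal as a crude size proxy).
    def scale(pts):
        return np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))

    s = scale(target_neutral) / scale(driver_neutral)

    # Apply the scaled displacement to the target's neutral geometry, so
    # the target makes the driver's expression with its own identity.
    return target_neutral + s * displacement

# Toy usage with random landmarks (68 points, as in common face trackers):
rng = np.random.default_rng(0)
neutral_a = rng.random((68, 2))
frame_a = neutral_a + 0.01 * rng.standard_normal((68, 2))
neutral_b = 1.2 * rng.random((68, 2))
animated_b = transfer_expression(neutral_a, frame_a, neutral_b)
```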

“How do you map one person’s performance onto someone else’s face without losing their identity?” said Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”

Perhaps this could be used to create VR experiences by integrating the animated models into 360-degree sets?


Abstract of What Makes Tom Hanks Look Like Tom Hanks

We reconstruct a controllable model of a person from a large photo collection that captures his or her persona, i.e., physical appearance and behavior. The ability to operate on unstructured photo collections enables modeling a huge number of people, including celebrities and other well photographed people without requiring them to be scanned. Moreover, we show the ability to drive or puppeteer the captured person B using any other video of a different person A. In this scenario, B acts out the role of person A, but retains his/her own personality and character. Our system is based on a novel combination of 3D face reconstruction, tracking, alignment, and multi-texture modeling, applied to the puppeteering problem. We demonstrate convincing results on a large variety of celebrities derived from Internet imagery and video.

Playing 3-D video games can boost memory formation

Video games used in the experiment: screenshots of 2-D Angry Birds (left) and Super Mario 3D World (right) (credit: Gregory D. Clemenson and Craig E.L. Stark/The Journal of Neuroscience)

Playing three-dimensional video games can boost the formation of memories, especially for people who lose memory as they age or suffer from dementia, according to University of California, Irvine (UCI) neurobiologists.

Craig Stark and Gregory Clemenson of UCI’s Center for the Neurobiology of Learning & Memory recruited non-gamer college students to play either a video game with a passive, two-dimensional environment (“Angry Birds”) or one with an intricate, 3-D setting (“Super Mario 3D World”) for 30 minutes per day over two weeks.

Before and after the two-week period, the students took memory tests that engaged the brain’s hippocampus, the region associated with complex learning and memory. They were given a series of pictures of everyday objects to study. Then they were shown images of the same objects, new ones, and others that differed slightly from the original items and asked to categorize them.

Students playing the 3-D video game improved their scores on the memory test by about 12 percent, the same amount that memory performance normally declines between the ages of 45 and 70, while the 2-D gamers did not improve.
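The article does not give the scoring formula, but tasks of this kind are commonly scored with a lure discrimination index that corrects for response bias. A minimal sketch, assuming that convention:

```python
# Hedged sketch of scoring a mnemonic similarity test like the one
# described above. The lure discrimination index (LDI) is a common
# convention in this literature, not a formula taken from the article.

def lure_discrimination_index(responses):
    """responses: list of (item_type, answer) pairs, where item_type is
    'repeat', 'lure', or 'new', and answer is 'old', 'similar', or 'new'."""
    def rate(item_type, answer):
        trials = [a for t, a in responses if t == item_type]
        return trials.count(answer) / len(trials) if trials else 0.0

    # Correctly calling lures "similar" requires the hippocampus-dependent
    # ability to discriminate them from studied items; subtracting the
    # "similar" rate for brand-new items corrects for response bias.
    return rate('lure', 'similar') - rate('new', 'similar')

example = [('repeat', 'old'), ('lure', 'similar'), ('lure', 'old'),
           ('new', 'new'), ('new', 'similar'), ('lure', 'similar')]
print(lure_discrimination_index(example))  # 2/3 - 1/2 = 0.1667
```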


UC Irvine | 3D Video Games and Memory – UC Irvine

Role of the hippocampus

Recognition of the slightly altered images requires the hippocampus, Stark said, and his earlier research had demonstrated that the ability to do this clearly declines with age. This is a large part of why it’s so difficult to learn new names or remember where you put your keys as you get older.

In previous studies on rodents, postdoctoral scholar Clemenson and others showed that exploring the environment resulted in the growth of new neurons that became entrenched in the hippocampus’ memory circuit and increased neuronal signaling networks. Stark noted some commonalities between the 3-D game the humans played and the environment the rodents explored — qualities lacking in the 2-D game. “First, the 3-D games have … a lot more spatial information in there to explore. Second, they’re much more complex, with a lot more information to learn,” Stark noted.

Stark added that it’s unclear whether the overall amount of information and complexity in the 3-D game or the spatial relationships and exploration is stimulating the hippocampus. “This is one question we’re following up on,” he said.


Myths of “brain training”

“Results from this study add to the existing literature that playing video games may provide meaningful stimulation to the brain. However, it is important to be cautious when generalizing these results to other instances. Recently, 70 neuroscientists from universities and institutions around the world published a letter discussing the myths of “brain training” (Max Planck Institute for Human Development/Stanford Center on Longevity, 2014. A consensus on the brain training industry from the scientific community. Stanford, CA: Stanford Center on Longevity).

“In contrast to typical brain training, typical video games are not created with specific cognitive processes in mind but rather designed to captivate and immerse the user into characters and adventure. Rather than isolate single brain processes, modern video games can naturally draw on or require many cognitive processes, including visual, spatial, emotional, motivational, attentional, critical thinking, problem solving, and working memory. It’s quite possible that by explicitly avoiding a narrow focus on a single … cognitive domain and by more closely paralleling natural experience, immersive video games may be better suited to provide enriching experiences that translate into functional gains.”

— Gregory D. Clemenson and Craig E.L. Stark. Virtual Environmental Enrichment through Video Games Improves Hippocampal-Associated Memory. The Journal of Neuroscience.


The next step is to determine if environmental enrichment — either through 3-D video games or real-world exploration experiences — can reverse the hippocampal-dependent cognitive deficits present in older populations.

“Can we use this video game approach to help improve hippocampus functioning?” Stark asked. “It’s often suggested that an active, engaged lifestyle can be a real factor in stemming cognitive aging. While we can’t all travel the world on vacation, we can do many other things to keep us cognitively engaged and active. Video games may be a nice, viable route.”

The research is described in a paper published today (Dec. 9) in The Journal of Neuroscience and is funded by a $300,000 Dana Foundation grant.

Skyscraper-style carbon-nanotube chip design ‘boosts electronic performance by factor of a thousand’

A new revolutionary high-rise architecture for computing (credit: Stanford University)

Researchers at Stanford and three other universities are creating a revolutionary new skyscraper-like high-rise architecture for computing based on carbon nanotube materials instead of silicon.

In Rebooting Computing, a special issue (in press) of the IEEE Computer journal, the team describes its new approach as “Nano-Engineered Computing Systems Technology,” or N3XT.

Suburban-style chip layouts create long commutes and regular traffic jams in electronic circuits, wasting time and energy, they note.

N3XT will break data bottlenecks by integrating processors and memory like floors in a skyscraper, and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators.

The N3XT high-rise approach will move more data, much faster, using far less energy, than would be possible using low-rise circuits, according to the researchers.

Stanford researchers including Associate Professor Subhasish Mitra and Professor H.-S. Philip Wong have “assembled a group of top thinkers and advanced technologies to create a platform that can meet the computing demands of the future,” Mitra says.

“When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” Wong claims.

Carbon nanotube transistors

Engineers have previously tried to stack silicon chips but with limited success, the researchers suggest. Fabricating a silicon chip requires temperatures close to 1,800 degrees Fahrenheit, making it extremely challenging to build a silicon chip atop another without damaging the first layer. The current approach to what are called 3-D, or stacked, chips is to construct two silicon chips separately, then stack them and connect them with a few thousand wires.

But conventional 3-D silicon chips are still prone to traffic jams and it takes a lot of energy to push data through what are a relatively few connecting wires.

The N3XT team is taking a radically different approach: building layers of processors and memory directly atop one another, connected by millions of vias that can move more data over shorter distances than traditional wires, using less energy, and immersing computation and memory storage into an electronic super-device.

The key is the use of non-silicon materials that can be fabricated at much lower temperatures than silicon, so that processors can be built on top of memory without the new layer damaging the layer below. As in IBM’s recent chip breakthrough (see “Method to replace silicon with carbon nanotubes developed by IBM Research“), N3XT chips are based on carbon nanotube (CNT) transistors.

Transistors are fundamental units of a computer processor, the tiny on-off switches that create digital zeroes and ones. CNT transistors are faster and more energy-efficient than silicon transistors, and much thinner. Moreover, in the N3XT architecture, they can be fabricated and placed over and below other layers of memory.

Among the N3XT scholars working at this nexus of computation and memory are Christos Kozyrakis and Eric Pop of Stanford, Jeffrey Bokor and Jan Rabaey of the University of California, Berkeley, Igor Markov of the University of Michigan, and Franz Franchetti and Larry Pileggi of Carnegie Mellon University.

New storage technologies 

Team members also envision using data storage technologies that rely on materials other than silicon. This would allow for the new materials to be manufactured on top of CNTs, using low-temperature fabrication processes.

One such data storage technology is called resistive random-access memory, or RRAM (see “‘Memristors’ based on transparent electronics offer technology of the future“). Resistance slows down electrons, creating a zero, while conductivity allows electrons to flow, creating a one. Tiny jolts of electricity switch RRAM memory cells between these two digital states. N3XT team members are also experimenting with a variety of nanoscale magnetic storage materials.
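As a rough illustration of the bit-storage behavior just described, here is a toy software model of an RRAM cell; the voltage thresholds and polarities are invented for illustration and are not device parameters from the paper.

```python
# Minimal toy model of the RRAM behavior described above: a cell whose
# resistance state encodes a bit, switched by small voltage pulses.
# Thresholds here are illustrative assumptions, not device parameters.

class RRAMCell:
    SET_V, RESET_V = 1.5, -1.5   # assumed switching thresholds (volts)

    def __init__(self):
        self.low_resistance = False  # high resistance = 0 (electrons slowed)

    def pulse(self, volts):
        if volts >= self.SET_V:       # jolt into the conductive state
            self.low_resistance = True
        elif volts <= self.RESET_V:   # jolt back into the resistive state
            self.low_resistance = False

    def read(self):
        # Conductive (low resistance) lets electrons flow: a one.
        return 1 if self.low_resistance else 0

cell = RRAMCell()
cell.pulse(2.0); assert cell.read() == 1
cell.pulse(-2.0); assert cell.read() == 0
```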

Just as skyscrapers have ventilation systems, N3XT high-rise chip designs incorporate thermal cooling layers. This work, led by Stanford mechanical engineers Kenneth Goodson and Mehdi Asheghi, ensures that the heat rising from the stacked layers of electronics does not degrade overall system performance.

Mitra and Wong have already demonstrated a working prototype of a high-rise chip. At the International Electron Devices Meeting in December 2014 they unveiled a four-layered chip made up of two layers of RRAM memory sandwiched between two layers of CNTs (see “Stanford engineers invent radical ‘high-rise’ 3D chips“).

In their N3XT paper, they ran simulations showing that their high-rise approach was a thousand times more efficient at running many important and highly demanding industrial software applications.


AI will replace smartphones within 5 years, Ericsson survey suggests

(credit: Ericsson ConsumerLab)

Artificial intelligence (AI) interfaces will take over, replacing smartphones in five years, according to a survey of more than 5,000 smartphone customers in nine countries by Ericsson ConsumerLab in the fifth edition of its annual trend report, 10 Hot Consumer Trends 2016 (and beyond).

Smartphone users believe AI will take over many common activities, such as searching the net, providing travel guidance, and acting as personal assistants. The survey found that 44 percent think an AI system would be as good as a teacher, and one third would like an AI interface to keep them company. A third would rather trust the fidelity of an AI interface than a human for sensitive matters; and 29 percent agree they would feel more comfortable discussing their medical condition with an AI system.

However, many of the users surveyed find smartphones limited.

Impractical. Constantly having a screen in the palm of your hand is not always practical, such as when driving or cooking.

Battery capacity limits. One in three smartphone users wants a 7−8 inch screen, creating a battery-drain vs. size-and-weight issue.

Not wearable. 85 percent of the smartphone users think intelligent wearable electronic assistants will be commonplace within 5 years, reducing the need to always touch a screen. And one in two users believes they will be able to talk directly to household appliances.

VR and 3D better. Smartphone users want movies that play virtually around the viewer, virtual tech support, and VR headsets for sports; more than 50 percent of consumers think holographic screens will be mainstream within 5 years — capabilities not available in a small handheld device. Half of the smartphone users want a 3D avatar to try on clothes online, and 64 percent would like the ability to see an item’s actual size and form when shopping online. Half of the users want to bypass shopping altogether, with a 3D printer for printing household objects such as spoons, toys and spare parts for appliances; 44 percent even want to print their own food or nutritional supplements.

The 10 hot trends for 2016 and beyond cited in the report

  1. The Lifestyle Network Effect. Four out of five people now experience an effect where the benefits gained from online services increase as more people use them. Globally, one in three consumers already participates in various forms of the sharing economy.
  2. Streaming Natives. Teenagers watch more YouTube video content daily than other age groups. Forty-six percent of 16-19 year-olds spend an hour or more on YouTube every day.
  3. AI Ends The Screen Age. Artificial intelligence will enable interaction with objects without the need for a smartphone screen. One in two smartphone users think smartphones will be a thing of the past within the next five years.
  4. Virtual Gets Real. Consumers want virtual technology for everyday activities such as watching sports and making video calls. Forty-four percent even want to print their own food.
  5. Sensing Homes. Fifty-five percent of smartphone owners believe bricks used to build homes could include sensors that monitor mold, leakage and electricity issues within the next five years. As a result, the concept of smart homes may need to be rethought from the ground up.
  6. Smart Commuters. Commuters want to use their time meaningfully and not feel like passive objects in transit. Eighty-six percent would use personalized commuting services if they were available.
  7. Emergency Chat. Social networks may become the preferred way to contact emergency services. Six out of 10 consumers are also interested in a disaster information app.
  8. Internables. Internal sensors that measure well-being in our bodies may become the new wearables. Eight out of 10 consumers would like to use technology to enhance sensory perceptions and cognitive abilities such as vision, memory and hearing.
  9. Everything Gets Hacked. Most smartphone users believe hacking and viruses will continue to be an issue. As a positive side-effect, one in five say they have greater trust in an organization that was hacked but then solved the problem.
  10. Netizen Journalists. Consumers share more information than ever and believe it increases their influence on society. More than a third believe blowing the whistle on a corrupt company online has greater impact than going to the police.

Source: 10 Hot Consumer Trends 2016. Ericsson ConsumerLab, Information Sharing, 2015. Base: 5,025 iOS/Android smartphone users aged 15-69 in Berlin, Chicago, Johannesburg, London, Mexico City, Moscow, New York, São Paulo, Sydney and Tokyo

Chemicals that make plants defend themselves could replace pesticides

Researchers used the relative induction of GUS activity as a screening tool for identifying new chemical elicitors that induce resistance in rice to the white-backed planthopper Sogatella furcifera (credit: Xingrui He et al./Bioorganic & Medicinal Chemistry Letters)

Chemical triggers that make plants defend themselves against insects could replace pesticides, causing less damage to the environment. New research published in an open-access paper in Bioorganic & Medicinal Chemistry Letters identifies five chemicals that trigger rice plants to fend off a common pest — the white-backed planthopper, Sogatella furcifera.

Pesticides have a detrimental effect on ecosystems, ravaging food chains and damaging the environment. One of the problems with many pesticides is that they kill indiscriminately.

Sogatella furcifera (credit: BIO Photography Group/CNC, Biodiversity Institute of Ontario)

For rice plants, this means pesticides kill the natural enemies of one of their biggest pests, the white-backed planthopper Sogatella furcifera. This pest attacks rice, leading to yellowing or “hopper burn,” which causes the plants to wilt and can damage the grains. It also transmits a viral disease, southern rice black-streaked dwarf virus, which stunts the plants’ growth and stops them from “heading,” which is when pollination occurs.

Left untreated, many of the insects’ eggs would be eaten by the pest’s natural enemies; but when pesticides kill off those enemies, the eggs hatch, leading to even more insects on the plants. What’s more, in some areas as many as a third of the planthoppers are resistant to pesticides.

“The extensive application of chemical insecticides not only causes severe environmental and farm produce pollution but also damages the ecosystem,” explained Dr. Jun Wu, one of the authors of the study and professor at Zhejiang University in China. “Therefore, developing safe and effective methods to control insect pests is highly desired; this is why we decided to investigate these chemicals.”

Enhancing plants’ natural defense mechanisms

Plants have natural self-defense mechanisms that kick in when they are infested with pests like the planthopper. This defense mechanism can be switched on using chemicals that do not harm the environment and are not toxic to the insects or their natural enemies.

In the new study, researchers from Zhejiang University in China developed a new way of identifying these chemicals. Using a specially designed screening system, they determined to what extent different chemicals switched on the plants’ defense mechanism. The team designed and synthesized 29 phenoxyalkanoic acid derivatives. Of these, they identified five that could be effective at triggering the rice plants to defend themselves.
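A minimal sketch of that selection logic, assuming candidates are ranked by fold-induction of GUS reporter activity over a solvent control; the compound names, readings, and threshold are hypothetical:

```python
# Hedged sketch of the screening logic described above: rank synthesized
# derivatives by how strongly they induce the GUS reporter relative to a
# solvent control, and keep those above a threshold. The data, threshold,
# and compound names are made up for illustration.

def select_elicitors(gus_activity, control_activity, fold_threshold=2.0):
    """gus_activity: dict mapping compound name -> measured GUS activity."""
    hits = {}
    for compound, activity in gus_activity.items():
        fold_induction = activity / control_activity
        if fold_induction >= fold_threshold:
            hits[compound] = round(fold_induction, 2)
    # Strongest inducers first
    return dict(sorted(hits.items(), key=lambda kv: -kv[1]))

# Toy screen over a few of the 29 derivatives (hypothetical readings):
readings = {'PAA-01': 120.0, 'PAA-02': 310.0, 'PAA-03': 95.0, 'PAA-04': 480.0}
print(select_elicitors(readings, control_activity=100.0))
# {'PAA-04': 4.8, 'PAA-02': 3.1}
```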

The researchers used bioassays to show that these chemicals could trigger the plant defense mechanism and repel the white-backed planthopper, which suggests potential use in insect pest management.

“We demonstrate for the first time that some phenoxyalkanoic acid derivatives have the potential to become such plant protection agents against the rice white-backed planthopper,” said Dr. Yonggen Lou, one of the authors of the study and professor at Zhejiang University in China. “This new approach to pest management could help protect the ecosystem while defending important crops against attack.”

The next step for the research will be to explore how effective the chemicals are at boosting the plants’ defenses and controlling planthoppers in the field.


Abstract of Finding new elicitors that induce resistance in rice to the white-backed planthopper Sogatella furcifera

Herein we report a new way to identify chemical elicitors that induce resistance in rice to herbivores. Using this method, by quantifying the induction of chemicals for GUS activity in a specific screening system that we established previously, 5 candidate elicitors were selected from the 29 designed and synthesized phenoxyalkanoic acid derivatives. Bioassays confirmed that these candidate elicitors could induce plant defense and then repel feeding of white-backed planthopper Sogatella furcifera.

Parkinson’s disease researchers discover a way to reprogram the genome to produce dopamine neurons

Image shows a protein found only in neurons (red) and an enzyme that synthesizes dopamine (green). Cell DNA is labeled in blue. (credit: Jian Feng, University at Buffalo)

Parkinson’s disease researchers at the Jacobs School of Medicine and Biomedical Sciences at the University at Buffalo have developed a way to ramp up the conversion of skin cells into neurons that can produce dopamine.

For decades, the elusive holy grail in Parkinson’s disease research has been finding a way to repair faulty dopamine neurons and put them back into patients, where they will start producing dopamine again. Researchers have tried fetal material, which is difficult to obtain and of variable quality; embryonic stem cells, a long process with a low yield; and, more recently, skin cells, from which it is difficult to obtain sufficient quantities of neurons.

To control movement and balance, dopamine signals travel from the substantia nigra in the midbrain up to brain regions including the corpus striatum, the globus pallidus, and the thalamus. But in Parkinson’s disease, most of the dopamine signals from the substantia nigra are lost. (credit: NIH)

Bypassing the cellular “gatekeeper”

The new UB research, published Dec. 7 in an open-access article in Nature Communications, is based on their discovery that p53, a transcription factor protein, acts as a gatekeeper protein.

“We found that p53 tries to maintain the status quo in a cell; it guards against changes from one cell type to another,” explained Jian Feng, PhD, senior author and professor in the Department of Physiology and Biophysics in the Jacobs School of Medicine and Biomedical Sciences at UB.

That is, p53 acts as a kind of gatekeeper protein to prevent conversion into another type of cell. “Once we lowered the expression of p53, then things got interesting: We were able to reprogram the [skin cell] fibroblasts into neurons much more easily.”

The advance may also be important for basic cell biology, Feng said. “This is a generic way for us to change cells from one type to another,” he said. “It proves that we can treat the cell as a software system when we remove the barriers to change. If we can identify transcription factor combinations that control which genes are turned on and off, we can change how the genome is being read. We might be able to play with the system more quickly and we might be able to generate tissues similar to those in the body, even brain tissue.

“People like to think that things proceed in a hierarchical way, that we start from a single cell and develop into an adult with about 40 trillion cells, but our results prove that there is no hierarchy,” he continued. “All our cells have the same source code as our first cell; this code is read differently to generate all types of cells that make up the body.”

Generating new dopamine neurons via cellular conversion

Timing was key to their success.  “We found that the point in the cell cycle just before the cell tries to sense its environment to ensure that all is ready for duplicating the genome is the prime time when the cell is receptive to change,” said Feng.

By lowering the genomic gatekeeper p53 at the right time in the cell cycle, they could easily turn skin cells into dopamine neurons, using transcription-factor combinations discovered in previous studies. These manipulations turn on the expression of Tet1, a DNA modification enzyme that changes how the genome is read.

“Our method is faster and much more efficient than previously developed ones,” said Feng. “The best previous method could take two weeks to produce 5 percent dopamine neurons. With ours, we got 60 percent dopamine neurons in ten days.”

The researchers have done multiple experiments to prove that these neurons are functional mid-brain dopaminergic neurons, the type lost in Parkinson’s disease.

The finding may enable researchers to generate patient-specific neurons in a dish that could then be transplanted into the brain to repair the faulty neurons, or used to efficiently screen new treatments for Parkinson’s disease.


Abstract of Cell cycle and p53 gate the direct conversion of human fibroblasts to dopaminergic neurons

The direct conversion of fibroblasts to induced dopaminergic (iDA) neurons and other cell types demonstrates the plasticity of cell fate. The low efficiency of these relatively fast conversions suggests that kinetic barriers exist to safeguard cell-type identity. Here we show that suppression of p53, in conjunction with cell cycle arrest at G1 and appropriate extracellular environment, markedly increase the efficiency in the transdifferentiation of human fibroblasts to iDA neurons by Ascl1, Nurr1, Lmx1a and miR124. The conversion is dependent on Tet1, as G1 arrest, p53 knockdown or expression of the reprogramming factors induces Tet1 synergistically. Tet1 knockdown abolishes the transdifferentiation while its overexpression enhances the conversion. The iDA neurons express markers for midbrain DA neurons and have active dopaminergic transmission. Our results suggest that overcoming these kinetic barriers may enable highly efficient epigenetic reprogramming in general and will generate patient-specific midbrain DA neurons for Parkinson’s disease research and therapy.

Can physical activity make you learn better?

An artistic representation of the take-home message of Lunghi and Sale’s “A cycling lane for brain rewiring”: physical activity (such as cycling) is associated with increased brain plasticity. (credit: Dafne Lunghi Art)

Exercise may enhance plasticity of the adult brain — the ability of our neurons to change with experience — which is essential for learning, memory, and brain repair, Italian researchers report in an open-access paper in the Cell Press journal Current Biology.

Their research, which focused on the visual cortex, may offer hope for people with traumatic brain injury or eye conditions such as amblyopia, the researchers suggest. “We provide the first demonstration that moderate levels of physical activity enhance neuroplasticity in the visual cortex of adult humans,” says Claudia Lunghi of the University of Pisa in Italy.

Brain plasticity is generally thought to decline with age, especially in the sensory regions of the brain (such as vision). But previous studies by research colleague Alessandro Sale of the National Research Council’s Neuroscience Institute showed that animals performing physical activity — for example, rats running on a wheel — showed elevated levels of plasticity in the visual cortex and had improved recovery from amblyopia compared to more sedentary animals.

Binocular rivalry test

Binocular rivalry before and after “monocular deprivation” (reduced vision due to a patch) for inactive and active groups (credit: Claudia Lunghi and Alessandro Sale/Current Biology)

To find out whether the same might hold true for people, the researchers used a simple test of binocular rivalry. When people have one eye patched for a short period of time, the closed eye becomes stronger as the visual brain attempts to compensate for the lack of visual input. This recovered strength (after the eye patch is removed) is a measure of the brain’s visual plasticity.

In the new study, Lunghi and Sale put 20 adults through this test twice. In one test, participants with the dominant eye patched with a translucent material watched a movie while relaxing in a chair. In the other test, participants with one eye patched also watched a movie, but while exercising on a stationary bike for ten-minute intervals during the movie.
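A minimal sketch of how the rivalry measurements might be reduced to a plasticity index, assuming a simple ratio of mean dominance durations (the exact index in the study may differ, and all numbers are hypothetical):

```python
# Hedged sketch of quantifying the binocular-rivalry measure: compare how
# long the deprived (patched) eye dominates perception before vs. after
# patching. The ratio used here is a common convention in this literature;
# the study's exact index may differ.

def dominance_ratio(deprived_durations, other_durations):
    """Mean dominance duration of the deprived eye over the other eye."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(deprived_durations) / mean(other_durations)

# Hypothetical rivalry phase durations (seconds) from one observer:
before = dominance_ratio([2.1, 1.9, 2.0], [2.2, 2.0, 2.1])  # ~1.0: balanced
after_rest = dominance_ratio([2.9, 3.1, 2.8], [1.8, 1.7, 1.9])
after_exercise = dominance_ratio([3.8, 4.1, 3.9], [1.5, 1.4, 1.6])

# A larger post-patching boost indicates greater plasticity; the study
# found the boost was bigger in the exercise condition.
print(before, after_rest, after_exercise)
```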

Exercise enhances brain plasticity (at least for vision)

Result: brain plasticity in the patched eye was enhanced by the exercise. After physical activity, the patched eye strengthened more quickly (indicating increased levels of brain plasticity) than it did when participants simply relaxed in a chair.

While further study is needed, the researchers think this stronger vision may have resulted from a decrease in an inhibitory neurotransmitter called GABA caused by exercise, allowing the brain to become more responsive.

The findings suggest that exercise may play an important role in brain health and recovery. This could be especially good news for people with amblyopia (called “lazy eye” because the brain “turns off” the visual processing of the weak eye to prevent double vision) — generally considered to be untreatable in adults.

Lunghi and Sale say they now plan to investigate the effects of moderate levels of physical exercise on visual function in amblyopic adult patients and to look deeper into the underlying neural mechanisms.

Time for a walk or bike ride?

UPDATE Dec. 10, 2015: title wording changed from “smarter” to “learn better.”


Abstract of A cycling lane for brain rewiring

Brain plasticity, defined as the capability of cerebral neurons to change in response to experience, is fundamental for behavioral adaptability, learning, memory, functional development, and neural repair. The visual cortex is a widely used model for studying neuroplasticity and the underlying mechanisms. Plasticity is maximal in early development, within the so-called critical period, while its levels abruptly decline in adulthood. Recent studies, however, have revealed a significant residual plastic potential of the adult visual cortex by showing that, in adult humans, short-term monocular deprivation alters ocular dominance by homeostatically boosting responses to the deprived eye. In animal models, a reopening of critical period plasticity in the adult primary visual cortex has been obtained by a variety of environmental manipulations, such as dark exposure, or environmental enrichment, together with its critical component of enhanced physical exercise. Among these non-invasive procedures, physical exercise emerges as particularly interesting for its potential of application to clinics, though there has been a lack of experimental evidence available that physical exercise actually promotes visual plasticity in humans. Here we report that short-term homeostatic plasticity of the adult human visual cortex induced by transient monocular deprivation is potently boosted by moderate levels of voluntary physical activity. These findings could have a bearing in orienting future research in the field of physical activity application to clinical research.

As the worm turns: research tracks how an embryo’s brain is assembled

The image on the left shows skin cells (green dots) and neurons (red cell) marking the shape of the embryo. The image on the right shows the skin cells connected by the software to make a computerized model of how the embryo folds and twists. (credit: Hari Shroff, National Institute of Biomedical Imaging and Bioengineering)

New open-source software that can help track the embryonic development and movement of neuronal cells throughout the body of a worm is now available to scientists. The software is described in a paper published December 3rd in the open-access journal eLife by a research team*.

One significant challenge is determining how complex neuronal structures made up of billions of cells form in the human brain. As with many biological challenges, researchers are first examining this question in simpler organisms, such as worms.

Although scientists have identified a number of important proteins that determine how neurons navigate during brain formation, it’s largely unknown how all of these proteins interact in a living organism.

Model animals, despite their differences from humans, have already revealed much about human physiology because they are much simpler and easier to understand. In this case, researchers chose Caenorhabditis elegans (C. elegans), because it has only 302 neurons, 222 of which form while the worm is still an embryo.

While some of these neurons go to the worm nerve ring (brain), they also spread along the ventral nerve cord, which is broadly analogous to the spinal cord in humans. The worm even has its own versions of many of the same proteins used to direct brain formation in more complex organisms such as flies, mice, or humans.

Tracking neurons: a complex task

“Understanding why and how neurons form and the path they take to reach their final destination could one day give us valuable information about how proteins and other molecular factors interact during neuronal development,” said Hari Shroff, Ph.D., head of the NIBIB research team. “We don’t yet understand neurodevelopment even in the context of the humble worm, but we’re using it as a simple model of how these factors work together to drive the development of the worm brain and neuronal structure. We’re hoping that by doing so, some of the lessons will translate all the way up to humans.”

These four images show the step-by-step process the computer program goes through to “untwist” a worm image. First (Image D), the computer identifies the cells to track. Then (Image E), the computer begins to create a lattice that traces the shape of the worm. Once the computer has traced the lattice, it can create a 3D model of the worm embryo (Image F). Finally, it can untwist the model (Image G). (credit: Hari Shroff, National Institute of Biomedical Imaging and Bioengineering)

However, following neurons as they travel through the worm during its embryonic development is not as simple as it might seem. The first challenge was to create new microscopes that could record the embryogenesis of these worms without damaging them through too much light exposure while still getting the resolution needed to clearly see individual cells.

Shroff and his team at NIBIB, in collaboration with Daniel Colon-Ramos at Yale University and Zhirong Bao at Sloan-Kettering, tackled this problem by developing new microscopes that improved the speed and resolution at which they could image worm embryonic development.

The second problem was that during development, the worm begins to “twitch,” moving around inside the egg. The folding and twisting makes it hard to track cells and parse out movement. For example, if a neuron moves in the span of a couple of minutes, is it because the embryo twisted or because the neuron actually changed position within the embryo?

Understanding the mechanisms that move neurons to their final destination is an important factor in understanding how brains form — and is difficult to determine without knowing where and how a neuron is moving. Finally, it can be challenging to determine where a neuron is in 3D space while looking at a two-dimensional image — especially of a worm that’s folded up.

The worm embryo is normally transparent, but the researchers made several cells in the embryo glow with fluorescent proteins to act as markers.

When a microscopic image of these cells is fed into the program, the computer identifies each cell and uses the information to create a model of the worm, which it then computationally “untwists” to generate a straightened image. The program also enables a user to check the accuracy of the computer model and edit it when any mistakes are discovered.
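A minimal sketch of the untwisting idea, reduced to 2-D: fit a centerline through marker cells, then re-express each tracked cell as a distance along the body plus a signed offset from the axis. The real software builds a full 3-D lattice from seam cells, so this is an illustrative simplification:

```python
# Hedged sketch of "untwisting": given marker-cell positions along the
# embryo's bent body, re-express each tracked cell in a straightened
# coordinate frame (arc length along the body, offset from the axis).
import numpy as np

def untwist(centerline, cells):
    """centerline: (M, 2) ordered points along the bent body axis.
    cells: (K, 2) positions of tracked cells. Returns (K, 2) array of
    (arc_length, signed_offset) coordinates in the straightened frame."""
    seg = np.diff(centerline, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])

    out = []
    for c in cells:
        # Nearest centerline vertex (a crude stand-in for true projection).
        d = np.linalg.norm(centerline - c, axis=1)
        i = int(np.argmin(d))
        # Signed offset: which side of the local tangent the cell lies on.
        t = seg[min(i, len(seg) - 1)]
        normal = np.array([-t[1], t[0]]) / np.linalg.norm(t)
        out.append((arc[i], float(np.dot(c - centerline[i], normal))))
    return np.array(out)

# Toy example: a bent (quarter-circle) body with two cells beside it.
theta = np.linspace(0, np.pi / 2, 50)
body = np.stack([np.cos(theta), np.sin(theta)], axis=1)
cells = np.array([[1.1, 0.1], [0.1, 1.1]])
print(untwist(body, cells))  # similar offsets, different arc positions
```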

In addition, users can also mark cells or structures within the worm embryo they want the program to track, allowing the users to follow the position of a cell as it moves and grows in the developing embryo. This feature could help scientists understand how certain cells develop into neurons, as opposed to other types of cells, and what factors influence the development of the brain and neuronal structure.

A worm atlas

Shroff and his colleagues say that such technology will be pivotal in their project to create a 4D neurodevelopmental “worm atlas” (see also http://www.wormguides.org) that attempts to catalog the formation of the worm nervous system.

This catalog will be the first comprehensive view of how an entire nervous system develops, and the researchers believe that it will be helpful in understanding the fundamental mechanisms by which all nervous systems, including ours, assemble. They also expect that some of the concepts developed, such as the approach taken to combine neuronal data from multiple embryos, can be applied to additional model organisms besides the worm.

* Researchers at the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and the Center for Information Technology (CIT); along with Memorial Sloan-Kettering Institute, New York City; Yale University, New Haven, Connecticut; Zhejiang University, China; and the University of Connecticut Health Center, Farmington. NIBIB is part of the National Institutes of Health.


NIBIB | C. Elegans Embryo Development


Abstract of Untwisting the Caenorhabditis elegans embryo

The nematode Caenorhabditis elegans possesses a simple embryonic nervous system comprising 222 neurons, a number small enough that the growth of each cell could be followed to provide a systems-level view of development. However, studies of single cell development have largely been conducted in fixed or pre-twitching live embryos, because of technical difficulties associated with embryo movement in late embryogenesis. We present open source untwisting and annotation software which allows the investigation of neurodevelopmental events in post-twitching embryos, and apply them to track the 3D positions of seam cells, neurons, and neurites in multiple elongating embryos. The detailed positional information we obtained enabled us to develop a composite model showing movement of these cells and neurites in an “average” worm embryo. The untwisting and cell tracking capability we demonstrate provides a foundation on which to catalog C. elegans neurodevelopment, allowing interrogation of developmental events in previously inaccessible periods of embryogenesis.

How robots can learn from babies

A collaboration between UW developmental psychologists and computer scientists aims to enable robots to learn in the same way that children naturally do. The team used research on how babies follow an adult’s gaze to “teach” a robot to perform the same task. (credit: University of Washington)

Babies learn about the world by exploring how their bodies move in space, grabbing toys, pushing things off tables, and by watching and imitating what adults are doing. So instead of laboriously writing code (or moving a robot’s arm or body to show it how to perform an action), why not just let robots learn the way babies do?

That’s exactly what University of Washington (UW) developmental psychologists and computer scientists have now demonstrated in experiments that suggest that robots can “learn” much like kids — by amassing data through exploration, watching a human do something, and determining how to perform that task on their own.

That new method would allow someone who doesn’t know anything about computer programming to teach a robot by demonstration — showing the robot how to clean your dishes, fold your clothes, or do household chores.

“But to achieve that goal, you need the robot to be able to understand those actions and perform them on their own,” said Rajesh Rao, a UW professor of computer science and engineering and senior author of an open-access paper in the journal PLoS ONE.

In the paper, the UW team developed a new probabilistic model aimed at solving a fundamental challenge in robotics: building robots that can learn new skills by watching people and imitating them. The roboticists collaborated with UW psychology professor and I-LABS co-director Andrew Meltzoff, whose seminal research has shown that children as young as 18 months can infer the goal of an adult’s actions and develop alternate ways of reaching that goal themselves.

In one example, infants saw an adult try to pull apart a barbell-shaped toy, but the adult failed to achieve that goal because the toy was stuck together and his hands slipped off the ends. The infants watched carefully and then decided to use alternate methods — they wrapped their tiny fingers all the way around the ends and yanked especially hard — duplicating what the adult intended to do.

Machine-learning algorithms based on play

This robot used the new UW model to imitate a human moving toy food objects around a tabletop. By learning which actions worked best with its own geometry, the robot could use different means to achieve the same goal — a key to enabling robots to learn through imitation. (credit: University of Washington)

Children acquire intention-reading skills, in part, through self-exploration that helps them learn the laws of physics and how their own actions influence objects, eventually allowing them to amass enough knowledge to learn from others and to interpret their intentions. Meltzoff thinks that one of the reasons babies learn so quickly is that they are so playful.

“Babies engage in what looks like mindless play, but this enables future learning. It’s a baby’s secret sauce for innovation,” Meltzoff said. “If they’re trying to figure out how to work a new toy, they’re actually using knowledge they gained by playing with other toys. During play they’re learning a mental model of how their actions cause changes in the world. And once you have that model you can begin to solve novel problems and start to predict someone else’s intentions.”

Rao’s team used that infant research to develop machine learning algorithms that allow a robot to explore how its own actions result in different outcomes. Then the robot uses that learned probabilistic model to infer what a human wants it to do and complete the task, and even to “ask” for help if it’s not certain it can.
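A minimal sketch of that inference step, with toy goals and a hand-written forward model standing in for the probabilities the robot would learn through self-exploration:

```python
# Hedged sketch of the goal-inference idea: the robot learns (from its own
# exploration) a forward model P(outcome | action, goal), then inverts it
# with Bayes' rule to infer which goal a human is pursuing from observed
# outcomes. Goals, outcomes, and probabilities here are toy assumptions,
# not the paper's actual model.

def infer_goal(observed_outcomes, goals, likelihood, prior):
    """Posterior over goals given a sequence of observed outcomes.

    likelihood(outcome, goal) -> P(outcome | goal), which in the UW
    approach would be built from the robot's self-learned action models.
    """
    posterior = dict(prior)
    for outcome in observed_outcomes:
        for g in goals:
            posterior[g] *= likelihood(outcome, g)
        z = sum(posterior.values())
        posterior = {g: p / z for g, p in posterior.items()}
    return posterior

goals = ['move_to_left_bin', 'clear_table']
toy_model = {  # hypothetical P(outcome | goal)
    ('object_moved_left', 'move_to_left_bin'): 0.8,
    ('object_moved_left', 'clear_table'): 0.4,
    ('object_removed', 'move_to_left_bin'): 0.1,
    ('object_removed', 'clear_table'): 0.5,
}
posterior = infer_goal(['object_moved_left', 'object_removed'], goals,
                       lambda o, g: toy_model[(o, g)],
                       {g: 0.5 for g in goals})
print(posterior)  # 'clear_table' becomes the more probable goal
```

A low-confidence posterior is also what would let the robot “ask” for help, as described above.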

How to follow a human’s gaze

The team tested its robotic model in two different scenarios: a computer simulation experiment in which a robot learns to follow a human’s gaze, and another experiment in which an actual robot learns to imitate human actions involving moving toy food objects to different areas on a tabletop.

In the gaze experiment, the robot learns a model of its own head movements and assumes that the human’s head is governed by the same rules. The robot tracks the beginning and ending points of a human’s head movements as the human looks across the room and uses that information to figure out where the person is looking. The robot then uses its learned model of head movements to fixate on the same location as the human.
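For intuition, a minimal geometric sketch of the fixation step, assuming a simple ray-plane intersection with a tabletop; the head-pose values and plane height are illustrative:

```python
# Hedged geometric sketch of gaze following: assuming the human's head
# obeys the same ray geometry the robot learned for its own head, intersect
# the head's viewing ray with the table plane to find the fixation point.
# All numbers are illustrative.
import numpy as np

def fixation_point(head_pos, yaw, pitch, table_z=0.0):
    """head_pos: (x, y, z) of the head; yaw/pitch in radians.
    Returns the (x, y) fixation point on the plane z = table_z."""
    direction = np.array([np.cos(pitch) * np.cos(yaw),
                          np.cos(pitch) * np.sin(yaw),
                          np.sin(pitch)])
    if direction[2] >= 0:
        raise ValueError("gaze ray does not hit the table")
    t = (table_z - head_pos[2]) / direction[2]  # ray-plane intersection
    hit = np.asarray(head_pos) + t * direction
    return hit[:2]

# Head 1.2 m above the table, looking down and to the right:
print(fixation_point((0.0, 0.0, 1.2), yaw=0.3, pitch=-0.6))
```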

The team also recreated one of Meltzoff’s tests that showed infants who had experience with visual barriers and blindfolds weren’t interested in looking where a blindfolded adult was looking, because they understood the person couldn’t actually see. Once the team enabled the robot to “learn” what the consequences of being blindfolded were, it no longer followed the human’s head movement to look at the same spot.

Smart movements: beyond mimicking

In the second experiment, the team allowed a robot to experiment with pushing or picking up different objects and moving them around a tabletop. The robot used that model to imitate a human who moved objects around or cleared everything off the tabletop. Rather than rigidly mimicking the human action each time, the robot sometimes used different means to achieve the same ends.

“If the human pushes an object to a new location, it may be easier and more reliable for a robot with a gripper to pick it up to move it there rather than push it,” said lead author Michael Jae-Yoon Chung, a UW doctoral student in computer science and engineering. “But that requires knowing what the goal is, which is a hard problem in robotics and which our paper tries to address.”

Though the initial experiments involved learning how to infer goals and imitate simple behaviors, the team plans to explore how such a model can help robots learn more complicated tasks.

“Babies learn through their own play and by watching others,” says Meltzoff, “and they are the best learners on the planet — why not design robots that learn as effortlessly as a child?”

That raises a question: can babies also learn from robots they’ve taught — in a closed loop? And where might that eventually take education — and civilization?


Abstract of A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning

A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.

‘Nanobombs’ that blow up cancer cells

(A) Three agents are encapsulated inside the nanoparticles: the microRNA drug miR-34a for gene therapy of prostate cancer stem cells, indocyanine green (ICG) for absorbing laser light, and ammonium bicarbonate for gas generation under heating. (B) Laser light causes the nanoparticles to expand, penetrating the cancer cell’s endosomal/lysosomal barrier (green circles), blowing up cancer cells (yellow), and releasing the miR-34a drug to inhibit the protein CD44, which is crucial for cancer stem cell survival. (credit: Hai Wang et al./Advanced Materials)

Researchers at The Ohio State University Comprehensive Cancer Center have developed nanoparticles that swell and burst when exposed to near-infrared laser light.

These “nanobombs” may be able to kill cancer cells outright, or at least stall their growth — overcoming a biological barrier that has blocked development of drug agents that attempt to alter cancer-cell gene expression (conversion of genes to proteins). These kinds of drug agents are generally forms of RNA (ribonucleic acid), and are notoriously difficult to use as drugs for two main reasons:

  • They are quickly degraded when free in the bloodstream.
  • When ordinary nanoparticles are taken up by cancer cells, the cancer cells often enclose them in small compartments called endosomes, preventing the drug molecules from reaching their target, and degrading them.

Zapping tumors and cancer stem cells with laser light and nanoparticles 

In this new study, published in the journal Advanced Materials, the researchers packaged nanoparticles with the RNA agent (drug) and ammonium bicarbonate, a leavening agent that makes the nanoparticles swell to three or more times their size when exposed to the heat generated by near-infrared laser light (much as it makes bread rise in baking). That swelling causes the endosomes to burst, dispersing the therapeutic RNA drug into the cell.

For their study, the researchers used human prostate cancer cells and tumors in an animal model. The nanoparticles were equipped to target cancer-stem-like cells (CSCs), which are cancer cells with properties of stem cells. CSCs often resist therapy and are thought to play an important role in cancer development and recurrence.

The therapeutic agent in the nanoparticles was a form of microRNA called miR-34a. The researchers chose this molecule because it can lower the levels of a protein (CD44) that is crucial for CSC survival.

Near-infrared light can penetrate tissue to a depth of one centimeter or more, depending on laser-light wavelength and power (see “‘Golden window’ wavelength range for optimal deep-brain near-infrared imaging determined”). For deeper tumors, the light would be delivered using minimally invasive surgery.


Abstract of A Near-Infrared Laser-Activated “Nanobomb” for Breaking the Barriers to MicroRNA Delivery

A near-infrared laser-activated “nanobomb” is synthesized using lipid and multiple polymers to break the extracellular and intracellular barriers to cytosolic delivery of microRNAs. The nanobomb could be used to effectively destroy tumors and cancer stem-like cells in vitro and in vivo with minimal side effect.