Older people in Germany and England getting smarter, but not fitter

(credit: iStock)

People over age 50 are scoring better on cognitive tests than people of the same age did in the past — a trend that could be linked to rising education levels and increased use of technology in daily life, according to a new study published as an open-access paper in the journal PLOS ONE. But the study also showed that the average physical health of the older population has declined.

The study, by researchers at the International Institute for Applied Systems Analysis (IIASA) in Austria, relied on representative survey data from Germany that measured cognitive processing speed, physical fitness, and mental health in 2006 and again in 2012.

It found that cognitive test scores increased significantly within the six-year period (for men and women and at all ages from 50 to 90 years), while physical functioning and mental health declined, especially for low-educated men aged 50–64. The survey data was representative of the non-institutionalized German population that was mentally and physically able to participate in the tests.

Cognition normally begins to decline with age, and is one key characteristic that demographers use to understand how different population groups age more successfully than others, according to IIASA population experts.

Changing lifestyles

Previous studies have found elderly people to be in increasingly good health — “younger” in many ways than previous generations at the same chronological age — with physical and cognitive measures all showing improvement over time. The new study is the first to show divergent trends over time between cognitive and physical function.

“We think that these divergent results can be explained by changing lifestyles,” says IIASA World Population Program researcher Nadia Steiber, author of the PLOS ONE study. “Life has become cognitively more demanding, with increasing use of communication and information technology also by older people, and people working longer in intellectually demanding jobs. At the same time, we are seeing a decline in physical activity and rising levels of obesity.”

A second study from IIASA population researchers, published last week in the journal Intelligence, found similar results, suggesting that older people have also become smarter in England.

“On average, test scores of people aged 50+ today correspond to test scores from people 4–8 years younger and tested 6 years earlier,” says Valeria Bordone, a researcher at IIASA and the affiliated Wittgenstein Centre for Demography and Global Human Capital.

The studies both provide confirmation of the “Flynn effect” — a trend of rising performance on standard IQ tests from generation to generation. The studies show that changes in education levels in the population can explain part, but not all, of the effect.

“We show for the first time that although compositional changes of the older population in terms of education partly explain the Flynn effect, the increasing use of modern technology such as computers and mobile phones in the first decade of the 2000s also contributes considerably to its explanation,” says Bordone.

The researchers note that while the findings apply to Germany and England, future research may provide evidence on other countries.


IIASA | Rethinking population aging


Abstract of Population Aging at Cross-Roads: Diverging Secular Trends in Average Cognitive Functioning and Physical Health in the Older Population of Germany

This paper uses individual-level data from the German Socio-Economic Panel to model trends in population health in terms of cognition, physical fitness, and mental health between 2006 and 2012. The focus is on the population aged 50–90. We use a repeated population-based cross-sectional design. As outcome measures, we use SF-12 measures of physical and mental health and the Symbol-Digit Test (SDT) that captures cognitive processing speed. In line with previous research we find a highly significant Flynn effect on cognition; i.e., SDT scores are higher among those who were tested more recently (at the same age). This result holds for men and women, all age groups, and across all levels of education. While we observe a secular improvement in terms of cognitive functioning, at the same time, average physical and mental health has declined. The decline in average physical health is shown to be stronger for men than for women and found to be strongest for low-educated, young-old men aged 50–64: the decline over the 6-year interval in average physical health is estimated to amount to about 0.37 SD, whereas average fluid cognition improved by about 0.29 SD. This pattern of results at the population-level (trends in average population health) stands in interesting contrast to the positive association of physical health and cognitive functioning at the individual-level. The findings underscore the multi-dimensionality of health and the aging process.


Abstract of Smarter every day: The deceleration of population ageing in terms of cognition

Cognitive decline correlates with age-associated health risks and has been shown to be a good predictor of future morbidity and mortality. Cognitive functioning can therefore be considered an important measure of differential aging across cohorts and population groups. Here, we investigate if and why individuals aged 50+ born into more recent cohorts perform better in terms of cognition than their counterparts of the same age born into earlier cohorts (Flynn effect). Based on two waves of English and German survey data, we show that cognitive test scores of participants aged 50+ in the later wave are higher compared with those of participants aged 50+ in the earlier wave. The mean scores in the later wave correspond to the mean scores in the earlier wave obtained by participants who were on average 4–8 years younger. The use of a repeat cross-sectional design overcomes potential bias from retest effects. We show for the first time that although compositional changes of the older population in terms of education partly explain the Flynn effect, the increasing use of modern technology (i.e., computers and mobile phones) in the first decade of the 2000s also contributes to its explanation.

Speech-classifier program is better at predicting psychosis than psychiatrists

This image shows discrimination between at-risk youths who transitioned to psychosis (red) and those who did not (blue). The polyhedron contains all the at-risk youth who did NOT develop psychosis (blue). All of the at-risk youth who DID later develop psychosis (red) are outside the polyhedron. Thus the speech classifier had 100 percent discrimination or accuracy. The speech classifier consisted of “minimum semantic coherence” (the flow of meaning from one sentence to the next), and indices of reduced complexity of speech, including phrase length and decreased use of “determiner” pronouns (“that,” “what,” “whatever,” “which,” and “whichever”). (credit: Cheryl Corcoran et al./NPJ Schizophrenia/Columbia University Medical Center)

An automated speech-analysis program correctly differentiated between at-risk young people who developed psychosis over the following two and a half years and those who did not.

In a proof-of-principle study, researchers at Columbia University Medical Center, New York State Psychiatric Institute, and the IBM T. J. Watson Research Center found that the computerized analysis provided a more accurate classification than clinical ratings. The study was published Wednesday, Aug. 26, as an open-access paper in npj Schizophrenia.

About one percent of the population between the ages of 14 and 27 is considered to be at clinical high risk (CHR) for psychosis. CHR individuals have symptoms such as unusual or tangential thinking, perceptual changes, and suspiciousness. About 20% will go on to experience a full-blown psychotic episode. Identifying who falls in that 20% category before psychosis occurs has been an elusive goal. Early identification could lead to intervention and support that could delay, mitigate or even prevent the onset of serious mental illness.

Measuring psychosis

Speech provides a unique window into the mind, giving important clues about what people are thinking and feeling. Participants in the study took part in an open-ended, narrative interview in which they described their subjective experiences. These interviews were transcribed and then analyzed by computer for patterns of speech, including semantics (meaning) and syntax (structure).

The analysis established each patient’s semantic coherence (how well he or she stayed on topic), and syntactic structure, such as phrase length and use of determiner words that link the phrases. A clinical psychiatrist may intuitively recognize these signs of disorganized thoughts in a traditional interview, but a machine can augment what is heard by precisely measuring the variables. The participants were then followed for two and a half years.

The speech features that predicted psychosis onset included breaks in the flow of meaning from one sentence to the next, and speech that was characterized by shorter phrases with less elaboration.
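As a rough illustration of the kinds of measures involved, the sketch below computes a minimum semantic-coherence score and two simple speech-complexity measures from a transcript. It is not the authors’ code: TF-IDF sentence vectors stand in for the Latent Semantic Analysis embedding reported in the paper, and the example transcript is made up; only the determiner list comes from the study description.

```python
# Minimal sketch of speech-feature extraction in the spirit of the study.
# Assumptions: TF-IDF vectors stand in for the paper's Latent Semantic
# Analysis embedding; the transcript below is illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

transcript = [
    "I have been feeling watched when I walk to class.",
    "The lectures are hard to follow lately.",
    "Which is strange because I used to enjoy them.",
]

DETERMINERS = {"that", "what", "whatever", "which", "whichever"}

def speech_features(sentences):
    vectors = TfidfVectorizer().fit_transform(sentences).toarray()
    # "Minimum semantic coherence": the weakest meaning link between
    # consecutive sentences (low values = breaks in the flow of meaning).
    coherences = [
        cosine_similarity(vectors[i:i + 1], vectors[i + 1:i + 2])[0, 0]
        for i in range(len(sentences) - 1)
    ]
    tokens = [s.lower().rstrip(".?!").split() for s in sentences]
    max_phrase_len = max(len(t) for t in tokens)            # phrase length
    n_words = sum(len(t) for t in tokens)
    det_rate = sum(w in DETERMINERS for t in tokens for w in t) / n_words
    return {
        "min_coherence": min(coherences),
        "max_phrase_length": max_phrase_len,
        "determiner_rate": det_rate,
    }

print(speech_features(transcript))
```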

Speech classifier: 100% accurate

The speech-classifier tool developed in this study to automatically sort these specific, symptom-related features is striking for achieving 100 percent accuracy: the computer analysis correctly differentiated between the five individuals who later experienced a psychotic episode and the 29 who did not.
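The paper’s abstract describes the classifier as a convex-hull algorithm evaluated with leave-one-subject-out cross-validation. The sketch below illustrates that general idea on synthetic feature values (not the study’s data): a held-out participant is flagged as a likely converter if their feature point falls outside the convex hull of the non-converters’ points.

```python
# Sketch of convex-hull classification with leave-one-subject-out
# cross-validation, as described in the paper's abstract.
# The feature values below are synthetic, not the study's data.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
# Columns: [min_coherence, max_phrase_length, determiner_rate]
non_converters = rng.normal([0.5, 12.0, 0.05], [0.1, 2.0, 0.01], size=(29, 3))
converters = rng.normal([0.2, 7.0, 0.02], [0.05, 1.0, 0.005], size=(5, 3))

X = np.vstack([non_converters, converters])
y = np.array([0] * 29 + [1] * 5)          # 1 = later transitioned to psychosis

def inside_hull(point, cloud):
    """True if `point` lies inside the convex hull of `cloud`."""
    return Delaunay(cloud).find_simplex(point[None, :])[0] >= 0

correct = 0
for i in range(len(X)):                    # leave one subject out
    train_non = X[(y == 0) & (np.arange(len(X)) != i)]
    predicted = 0 if inside_hull(X[i], train_non) else 1
    correct += predicted == y[i]

print(f"Leave-one-subject-out accuracy on synthetic data: {correct / len(X):.2f}")
```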

These results suggest that this method may be able to identify thought disorder in its earliest, most subtle form, years before the onset of psychosis. Thought disorder is a key component of schizophrenia, but quantifying it has proved difficult.

For the field of schizophrenia research, and for psychiatry more broadly, this opens the possibility that new technology can aid in prognosis and diagnosis of severe mental disorders, and track treatment response. Automated speech analysis is inexpensive, portable, fast, and non-invasive. It has the potential to be a powerful tool that can complement clinical interviews and ratings.

Further research with a second, larger group of at-risk individuals is needed to see if this automated capacity to predict psychosis onset is both robust and reliable. Automated speech analysis used in conjunction with neuroimaging may also be useful in reaching a better understanding of early thought disorder, and the paths to develop treatments for it.


Abstract of Automated analysis of free speech predicts psychosis onset in high-risk youths

Background/Objectives: Psychiatry lacks the objective clinical tests routinely used in other specializations. Novel computerized methods to characterize complex behaviors such as speech could be used to identify and predict psychiatric illness in individuals.

Aims: In this proof-of-principle study, our aim was to test automated speech analyses combined with Machine Learning to predict later psychosis onset in youths at clinical high-risk (CHR) for psychosis.

Methods: Thirty-four CHR youths (11 females) had baseline interviews and were assessed quarterly for up to 2.5 years; five transitioned to psychosis. Using automated analysis, transcripts of interviews were evaluated for semantic and syntactic features predicting later psychosis onset. Speech features were fed into a convex hull classification algorithm with leave-one-subject-out cross-validation to assess their predictive value for psychosis outcome. The canonical correlation between the speech features and prodromal symptom ratings was computed.

Results: Derived speech features included a Latent Semantic Analysis measure of semantic coherence and two syntactic markers of speech complexity: maximum phrase length and use of determiners (e.g., which). These speech features predicted later psychosis development with 100% accuracy, outperforming classification from clinical interviews. Speech features were significantly correlated with prodromal symptoms.

Conclusions: Findings support the utility of automated speech analysis to measure subtle, clinically relevant mental state changes in emergent psychosis. Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry.

Why you’re smarter than a chicken

Sorry, wrong protein — you’re dinner (credit: Johnathan Nightingale via Flickr)

A single molecular event in a protein called PTBP1 in our cells could hold the key to how we evolved to become the smartest animal on the planet, University of Toronto researchers have discovered.

The conundrum: Humans and frogs, for example, have been evolving separately for 350 million years and use a remarkably similar repertoire of genes to build organs in the body. So what accounts for the vast range of organ size and complexity?

Benjamin Blencowe, a professor in the University of Toronto’s Donnelly Centre and Banbury Chair in Medical Research, and his team believe they now have the key: alternative splicing (AS).

With alternative splicing, the same gene can generate three different types of protein molecules, as in this example (credit: Wikipedia)

Here’s how alternative splicing works: specific sections of a gene called exons may be included or excluded from the final messenger RNA (mRNA) that expresses the gene (creates proteins). And that changes the arrangement of amino acid sequences.
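A toy sketch of that combinatorics, with made-up exon labels rather than real PTBP1 sequence: a gene whose optional exons can each be included or skipped yields several distinct mRNAs from the same genomic template.

```python
# Toy illustration of alternative splicing: the same gene (a fixed series of
# exons) can yield different mRNAs depending on which exons are included.
# Exon labels are made up; this is not real PTBP1 sequence.
from itertools import product

exons = ["E1", "E2", "E3", "E4"]        # exons in genomic order
constitutive = {"E1", "E4"}             # always included in the mRNA

def splice_variants(exons, constitutive):
    """Enumerate mRNAs from optional inclusion/skipping of the other exons."""
    optional = [e for e in exons if e not in constitutive]
    variants = []
    for include in product([True, False], repeat=len(optional)):
        keep = set(constitutive) | {e for e, inc in zip(optional, include) if inc}
        variants.append([e for e in exons if e in keep])   # preserve order
    return variants

for v in splice_variants(exons, constitutive):
    print("-".join(v))
# Skipping one exon (analogous to mammalian skipping of PTBP1 exon 9)
# produces a different mRNA, and hence a different protein, from the
# variant that includes it.
```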

This image shows a frog and human brain, brought to scale. Although the brain-building genes are similar in both, alternative splicing ensures greater protein diversity in human cells, which fuels organ complexity. (credit: Jovana Drinjakovic)

There are two forms of PTBP1: one that is common in all vertebrates, and another in mammals. The researchers showed that in mammalian cells, the presence of the mammalian version of PTBP1 unleashes a cascade of alternative splicing events that lead to a cell becoming a neuron instead of a skin cell, for example.

To prove that, they engineered chicken cells to make mammalian-like PTBP1, and this triggered alternative splicing events that are found in mammals, creating a smart chicken (no relation to the eponymous brand). Also, it turns out that alternative splicing prevalence increases with vertebrate complexity.

The end result: all those small accidental changes across specific genes have fueled the evolution of mammalian brains.

The study is published in the August 20 issue of Science.


Abstract of An alternative splicing event amplifies evolutionary differences between vertebrates

Alternative splicing (AS) generates extensive transcriptomic and proteomic complexity. However, the functions of species- and lineage-specific splice variants are largely unknown. Here we show that mammalian-specific skipping of polypyrimidine tract–binding protein 1 (PTBP1) exon 9 alters the splicing regulatory activities of PTBP1 and affects the inclusion levels of numerous exons. During neurogenesis, skipping of exon 9 reduces PTBP1 repressive activity so as to facilitate activation of a brain-specific AS program. Engineered skipping of the orthologous exon in chicken cells induces a large number of mammalian-like AS changes in PTBP1 target exons. These results thus reveal that a single exon-skipping event in an RNA binding regulator directs numerous AS changes between species. Our results further suggest that these changes contributed to evolutionary differences in the formation of vertebrate nervous systems.

‘I think I know that person … or do I?’

A cross-section of a rat’s brain, showing where key decisions are made about whether a memory is new or old and familiar (credit: Johns Hopkins University)

Know that feeling when you see someone and realize you may know them (or not)? Now we actually know where in the brain that happens — the CA3 region of the hippocampus, the seat of memory, thanks to Johns Hopkins University neuroscientists.

“You see a familiar face and say to yourself, ‘I think I’ve seen that face.’ But is this someone I met five years ago, maybe with thinner hair or different glasses — or is it someone else entirely?” said James J. Knierim, a professor of neuroscience at the university’s Zanvyl Krieger Mind/Brain Institute who led the research, described in the current issue of the journal Neuron.

Is that you under that beard? Oops, excuse me. “That’s one of the biggest problems our memory system has to solve,” Knierim said. “The final job of the CA3 region is to make the decision: Is it the same or is it different? Usually you are correct in remembering that this person is a slightly different version of the person you met years ago.

“But when you are wrong, and it embarrassingly turns out that this is a complete stranger, you want to create a memory of this new person that is absolutely distinct from the memory of your familiar friend, so you don’t make the mistake again.”

Would you like chocolate sprinkles on that cheese? Knierim and associates implanted electrodes in the hippocampus of rats and monitored them as they got to know an environment and as that environment changed. They trained the rats to run around a track, eating chocolate sprinkles. The track floor had four different textures — sandpaper, carpet padding, duct tape and a rubber mat.

The rat could see, feel and smell the differences in the textures. Meanwhile, a black curtain surrounding the track had various objects attached to it. Over 10 days, the rats built mental maps of that environment.

Messing with rat minds for fun and science. Then the experimenters changed things up. They rotated the track counter-clockwise while rotating the curtain clockwise, creating a perceptual mismatch in the rats’ minds. The effect, Knierim said, was similar to opening the door of your home and finding all of your pictures hanging on different walls and your furniture moved.

“Would you recognize it as your home or think you are lost?” he said. “It’s a very disorienting experience and a very uncomfortable feeling.”

Even when the perceptual mismatch between the track and curtain was small, the “pattern-separating” part of CA3 almost completely changed its activity patterns, creating a new memory of the altered environment. But the “pattern-completing” part of CA3 tended to retrieve a similar activity pattern used to encode the original memory, even when the perceptual mismatch increased.
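One simple way to picture that distinction (a minimal sketch, not the study’s analysis pipeline): pattern completion corresponds to a high correlation between the population activity vectors recorded before and after the mismatch, while pattern separation corresponds to a low correlation. The firing rates below are simulated.

```python
# Minimal sketch of the pattern-separation vs. pattern-completion distinction:
# compare the population activity vector recorded in the original environment
# with the one recorded after the track/curtain mismatch. Firing rates are
# simulated for illustration; this is not the study's analysis pipeline.
import numpy as np

rng = np.random.default_rng(1)
original = rng.poisson(5.0, size=50).astype(float)   # 50 cells, baseline session

# "Pattern completion": mismatch activity stays close to the original map.
completed = original + rng.normal(0, 0.5, size=50)

# "Pattern separation": a largely remapped, statistically independent pattern.
separated = rng.poisson(5.0, size=50).astype(float)

def similarity(a, b):
    """Pearson correlation between two population rate vectors."""
    return np.corrcoef(a, b)[0, 1]

print(f"completion-like similarity: {similarity(original, completed):.2f}")  # near 1
print(f"separation-like similarity: {similarity(original, separated):.2f}")  # typically near 0
```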

The findings, which validate models about how memory works, could help explain what goes wrong with memory in diseases like Alzheimer’s and could help to preserve people’s memories as they age.

This research was supported by National Institutes of Health grants and by the Johns Hopkins University Brain Sciences Institute.


Abstract of Neural population evidence of functional heterogeneity along the CA3 transverse axis: Pattern completion vs. pattern separation

Classical theories of associative memory model CA3 as a homogeneous attractor network because of its strong recurrent circuitry. However, anatomical gradients suggest a functional diversity along the CA3 transverse axis. We examined the neural population coherence along this axis, when the local and global spatial reference frames were put in conflict with each other. Proximal CA3 (near the dentate gyrus), where the recurrent collaterals are the weakest, showed degraded representations, similar to the pattern separation shown by the dentate gyrus. Distal CA3 (near CA2), where the recurrent collaterals are the strongest, maintained coherent representations in the conflict situation, resembling the classic attractor network system. CA2 also maintained coherent representations. This dissociation between proximal and distal CA3 provides strong evidence that the recurrent collateral system underlies the associative network functions of CA3, with a separate role of proximal CA3 in pattern separation.

A brain-computer interface for controlling an exoskeleton

A volunteer calibrating the exoskeleton brain-computer interface (credit: © Korea University/TU Berlin)

Scientists at Korea University and TU Berlin have developed a brain-computer interface (BCI) for a lower limb exoskeleton used for gait assistance by decoding specific signals from the user’s brain.

LEDs flickering at five different frequencies code for five different commands (credit: Korea University/TU Berlin)

Using an electroencephalogram (EEG) cap, the system allows users to move forward, turn left and right, sit, and stand, simply by staring at one of five flickering light emitting diodes (LEDs).

Each of the five LEDs flickers at a different frequency, corresponding to five types of movements. When the user focuses their attention on a specific LED, the flickering light generates a visual evoked potential in the EEG signal, which is then identified by a computer and used to control the exoskeleton to move in the appropriate manner (forward, left, right, stand, sit).
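The paper’s abstract notes that the frequency information is extracted with canonical correlation analysis (CCA) combined with k-nearest neighbors. The sketch below shows the CCA step only, on synthetic EEG; the sampling rate, LED frequencies, and command mapping are illustrative assumptions, not the system’s actual parameters.

```python
# Sketch of the standard CCA approach to detecting which flickering LED a user
# is attending to from EEG. The paper combines CCA with k-nearest neighbors;
# the frequencies, command mapping, and synthetic EEG here are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                                   # sampling rate (Hz), assumed
LED_FREQS = [9, 11, 13, 15, 17]            # one flicker frequency per command, assumed
COMMANDS = ["forward", "left", "right", "sit", "stand"]

def reference_signals(freq, n_samples, fs, harmonics=2):
    """Sine/cosine reference set at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def classify_ssvep(eeg, fs=FS):
    """Pick the LED whose reference signals correlate best with the EEG."""
    scores = []
    for freq in LED_FREQS:
        refs = reference_signals(freq, eeg.shape[0], fs)
        cca = CCA(n_components=1).fit(eeg, refs)
        u, v = cca.transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])   # canonical correlation
    return COMMANDS[int(np.argmax(scores))]

# Synthetic 2-second, 8-channel EEG segment dominated by an 11 Hz flicker.
t = np.arange(2 * FS) / FS
eeg = 0.5 * np.sin(2 * np.pi * 11 * t)[:, None] + np.random.randn(2 * FS, 8)
print(classify_ssvep(eeg))                 # expected: "left"
```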


Korea University/TU Berlin | A brain-computer interface for controlling an exoskeleton

The results are published in an open-access paper today (August 18) in the Journal of Neural Engineering.

“A key problem in designing such a system is that exoskeletons create lots of electrical ‘noise,’” explains Klaus Muller, an author of the paper. “The EEG signal [from the brain] gets buried under all this noise, but our system is able to separate out the EEG signal and the frequency of the flickering LED within this signal.”

“People with amyotrophic lateral sclerosis (ALS) (motor neuron disease) or spinal cord injuries face difficulties communicating or using their limbs,” he said. This system could let them walk again, he believes. He suggests that the control system could be added on to existing BCI devices, such as Open BCI devices.

In experiments with 11 volunteers, it took only a few minutes of training for them to learn to operate the system. Because of the flickering LEDs, the volunteers were carefully screened for epilepsy prior to taking part in the research. The researchers are now working to reduce the “visual fatigue” associated with longer-term use.


Abstract of A lower limb exoskeleton control system based on steady state visual evoked potentials

Objective. We have developed an asynchronous brain–machine interface (BMI)-based lower limb exoskeleton control system based on steady-state visual evoked potentials (SSVEPs). 

Approach. By decoding electroencephalography signals in real-time, users are able to walk forward, turn right, turn left, sit, and stand while wearing the exoskeleton. SSVEP stimulation is implemented with a visual stimulation unit, consisting of five light emitting diodes fixed to the exoskeleton. A canonical correlation analysis (CCA) method for the extraction of frequency information associated with the SSVEP was used in combination with k-nearest neighbors.

Main results. Overall, 11 healthy subjects participated in the experiment to evaluate performance. To achieve the best classification, CCA was first calibrated in an offline experiment. In the subsequent online experiment, our results exhibit accuracies of 91.3 ± 5.73%, a response time of 3.28 ± 1.82 s, an information transfer rate of 32.9 ± 9.13 bits/min, and a completion time of 1100 ± 154.92 s for the experimental parcour studied. 

Significance. The ability to achieve such high quality BMI control indicates that an SSVEP-based lower limb exoskeleton for gait assistance is becoming feasible.

Most complete functioning human-brain model to date, according to researchers

This image of the lab-grown brain is labeled to show identifiable structures: the cerebral hemisphere, the optic stalk, and the cephalic flexure, a bend in the mid-brain region, all characteristic of the human fetal brain (credit: The Ohio State University)

Scientists at The Ohio State University have developed a miniature human brain in a dish with the equivalent brain maturity of a five-week-old fetus.

The brain organoid, engineered from adult human skin cells, is the most complete human brain model yet developed, said Rene Anand, a professor of biological chemistry and pharmacology at Ohio State.

The lab-grown brain, about the size of a pencil eraser, has an identifiable structure and contains 99 percent of the genes present in the human fetal brain. Such a system will enable ethical and more rapid, accurate testing of experimental drugs before the clinical trial stage. It is intended to advance studies of genetic and environmental causes of central nervous system disorders.

“It not only looks like the developing brain, its diverse cell types express nearly all genes like a brain,” Anand said. “The power of this brain model bodes very well for human health because it gives us better and more relevant options to test and develop therapeutics other than [using] rodents.”

Anand reported on his lab-grown brain today (August 18) at the 2015 Military Health System Research Symposium in Ft. Lauderdale, Florida.

The main thing missing in this model is a vascular system. But what is there — a spinal cord, all major regions of the brain, multiple cell types, signaling circuitry and even a retina — has the potential to dramatically accelerate the pace of neuroscience research, said Anand, who is also a professor of neuroscience.

Organoid derivation and development (credit: Rene Anand and Susan McKay)

Created from pluripotent stem cells

“In central nervous system diseases, this will enable studies of either underlying genetic susceptibility or purely environmental influences, or a combination,” he said. According to genomic science, “there are up to 600 genes that give rise to autism, but we are stuck there. Mathematical correlations and statistical methods are insufficient in themselves to identify causation. You need an experimental system — you need a human brain.”

Anand’s method is proprietary and he has filed an invention disclosure with the university. He said he used techniques to differentiate pluripotent stem cells into cells that are designed to become neural tissue, components of the central nervous system, or other brain regions.

High-resolution imaging of the organoid identifies functioning neurons and their signal-carrying extensions — axons and dendrites — as well as astrocytes, oligodendrocytes and microglia. The model also activates markers for cells that have the classic excitatory and inhibitory functions in the brain, and that enable chemical signals to travel throughout the structure.

It takes about 15 weeks to grow a model system matching the maturity of a 5-week-old fetal human brain. Anand and colleague Susan McKay, a research associate in biological chemistry and pharmacology, let the model continue to grow to the 12-week point, observing expected maturation changes along the way.

“If we let it go to 16 or 20 weeks, that might complete it, filling in that 1 percent of missing genes. We don’t know yet,” he said.

Models of brain disorders and injury with civilian and military uses

He and McKay have already used the platform to create brain organoid models of Alzheimer’s and Parkinson’s diseases and autism in a dish. They hope that with further development and the addition of a pumping blood supply, the model could be used for stroke therapy studies. For military purposes, the system offers a new platform for the study of Gulf War illness, traumatic brain injury, and post-traumatic stress disorder.

Anand hopes his brain model could be incorporated into the Microphysiological Systems program, a platform the Defense Advanced Research Projects Agency is developing by using engineered human tissue to mimic human physiological systems.

Support for the work came from the Marci and Bill Ingram Research Fund for Autism Spectrum Disorders and the Ohio State University Wexner Medical Center Research Fund.

Anand and McKay are co-founders of a Columbus-based start-up company, NeurXstem, to commercialize the brain organoid platform, and have applied for funding from the federal Small Business Technology Transfer program to accelerate its drug discovery applications.

Surprising results from brain and cognitive studies of a 93-year-old woman athlete

Olga Kotelko’s brain “does not look like a 90-plus-year-old” — Beckman Institute director Art Kramer

Brain scans and cognitive tests of Olga Kotelko, a 93-year-old Canadian track-and-field athlete with more than 30 world records in her age group, may support the potential beneficial effects of exercise on cognition in the “oldest old.”

In the summer of 2012, researchers at the Beckman Institute for Advanced Science and Technology at the University of Illinois invited her to visit for an in-depth analysis of her brain. The resulting study was reported in the journal Neurocase.

A retired teacher and mother of two, Kotelko started her athletic career late in life. She began with slow-pitch softball at age 65, and at 77 switched to track-and-field events, later enlisting the help of a coach. By the time of her death in 2014, she had won 750 gold medals in her age group in World Masters Athletics events, and had set new world records in the 100-meter, 200-meter, high jump, long jump, javelin, discus, shot put and hammer events.


Beckman Institute | Senior Olympian: 93-Year-Old Track Star Shows Physical & Mental Fitness

Lacking a peer group of reasonably healthy nonagenarians for comparison, the researchers decided to compare Kotelko with a group of 58 healthy, low-active women who were 60 to 78 years old.

“In our studies, we often collect data from adults who are between 60 and 80 years old, and we have trouble finding participants who are 75 to 80 and relatively healthy,” said U. of I. postdoctoral researcher Agnieszka Burzynska, who led the new analysis. As a result, very few studies have focused on the “oldest old,” she said.

“Although it is tough to generalize from a single study participant to other individuals, we felt very fortunate to have an opportunity to study the brain and cognition of such an exceptional individual,” said Beckman Institute director Art Kramer, an author of the new study.

Aging processes in the brain

The researchers wanted to determine whether Kotelko’s late-life athleticism had slowed — or perhaps even reversed — some of the processes of aging in her brain.

“In general, the brain shrinks with age,” Burzynska said. Fluid-filled spaces appear between the brain and the skull, and the ventricles enlarge, she said.

“The cortex, the outermost layer of cells where all of our thinking takes place, that also gets thinner,” she said. White matter tracts, which carry nerve signals between brain regions, tend to lose their structural and functional integrity over time. And the hippocampus, which is important to memory, usually shrinks with age, Burzynska said.

Previous studies have shown that regular aerobic exercise can enhance cognition and boost brain function in older adults, and can even increase the volume of specific brain regions like the hippocampus, Kramer said.

Surprising test results

In one long day at the lab, Kotelko submitted to an MRI brain scan, a cardiorespiratory fitness test on a treadmill, and cognitive tests. (All of the data are available at XNAT, a public repository; Kotelko and her daughter agreed to make her data public.) The women in the comparison group underwent the same tests and scans.

Afterwards, Kramer asked Olga if she was tired; she replied, “I rarely get tired.” The decades-younger graduate students who tested her, however, looked exhausted.

Kotelko’s brain offered some intriguing first clues about the potentially beneficial effects of her active lifestyle.

White-matter tracts remarkably intact. “Her brain did not seem to be, in general, very shrunken, and her ventricles did not seem to be enlarged,” Burzynska said. On the other hand, she had obvious signs of advanced aging in the white-matter tracts of some brain regions, Burzynska said.

“Olga had quite a lot of white-matter hyperintensities, which are markers of unspecific white-matter damage,” she said. These are common in people over age 65, and tend to increase with age, she said.

As a whole, however, Kotelko’s white-matter tracts were remarkably intact — comparable to those of women decades younger, the researchers found. And the white-matter tracts in one region of her brain — the genu of the corpus callosum, which connects the right and left hemispheres at the very front of the brain — were in great shape, Burzynska said.

“Olga had the highest measure of white-matter integrity in that part of the brain, even higher than those younger females, which was very surprising,” she said. These white-matter tracts serve a region of the brain that is engaged in tasks known to decline fastest in aging, such as reasoning, planning and self-control, Burzynska said.

Better on cognitive tests than other adults her own age. Kotelko performed worse on cognitive tests than the younger women, but better than other adults her own age who had been tested in an independent study. “She was quicker at responding to the cognitive tasks than other adults in their 90s,” Burzynska said. “And on memory, she was much better than they were.”

Hippocampus larger than expected for her age. Her hippocampus was smaller than those of the younger participants, but larger than expected given her age, Burzynska said.

The new findings are only a very limited, first step toward calculating the effects of exercise on cognition in the oldest old, she said. “We have only one Olga and only at one time point, so it’s difficult to arrive at very solid conclusions,” Burzynska said.

“But I think it’s very exciting to see someone who is highly functioning at 93, possessing numerous world records in the athletic field and actually having very high integrity in a brain region that is very sensitive to aging. I hope it will encourage people that even as we age, our brains remain plastic. We have more and more evidence for that.”

The Robert Bosch Foundation and the National Institute on Aging at the National Institutes of Health supported this research, as did Abbott Nutrition, through the Center for Nutrition, Learning and Memory at the U. of I.

Kotelko biographer Bruce Grierson prompted researchers at the Beckman Institute to study Kotelko’s brain.


Abstract of White matter integrity, hippocampal volume, and cognitive performance of a world-famous nonagenarian track-and-field athlete

Physical activity (PA) and cardiorespiratory fitness (CRF) are associated with successful brain and cognitive aging. However, little is known about the effects of PA, CRF, and exercise on the brain in the oldest-old. Here we examined white matter (WM) integrity, measured as fractional anisotropy (FA) and WM hyperintensity (WMH) burden, and hippocampal (HIPP) volume of Olga Kotelko (1919–2014). Olga began training for competitions at age of 77 and as of June 2014 held over 30 world records in her age category in track-and-field. We found that Olga’s WMH burden was larger and the HIPP was smaller than in the reference sample (58 healthy low-active women 60–78 years old), and her FA was consistently lower in the regions overlapping with WMH. Olga’s FA in many normal-appearing WM regions, however, did not differ or was greater than in the reference sample. In particular, FA in her genu corpus callosum was higher than any FA value observed in the reference sample. We speculate that her relatively high FA may be related to both successful aging and the beneficial effects of exercise in old age. In addition, Olga had lower scores on memory, reasoning and speed tasks than the younger reference sample, but outperformed typical adults of age 90–95 on speed and memory. Together, our findings open the possibility of old-age benefits of increasing PA on WM microstructure and cognition despite age-related increase in WMH burden and HIPP shrinkage, and add to the still scarce neuroimaging data of the healthy oldest-old (>90 years) adults.

Scientists discover atomic-resolution secret of high-speed brain signaling

This illustration shows a protein complex at work in brain signaling. Its structure, which contains joined protein complexes known as SNARE (shown in blue, red, and green) and synaptotagmin-1 (orange), is shown in the foreground. This complex is responsible for the calcium-triggered release of neurotransmitters from our brain’s nerve cells in a process called synaptic vesicle fusion. The background image shows electrical signals traveling through a neuron. (credit: SLAC National Accelerator Laboratory)

Stanford School of Medicine scientists have mapped the 3D atomic structure of a two-part protein complex that controls the release of signaling chemicals, called neurotransmitters, from brain cells in less than one-thousandth of a second.

The experiments were reported today (August 17) in the journal Nature. Performed at the Linac Coherent Light Source (LCLS) X-ray laser at the Department of Energy’s SLAC National Accelerator Laboratory, the experiments were built on decades of previous research at Stanford University, Stanford School of Medicine, and SLAC.

“This is a very important, exciting advance that may open up possibilities for targeting new drugs to control neurotransmitter release,” said Axel Brunger, the study’s principal investigator — a professor at Stanford School of Medicine and SLAC and a Howard Hughes Medical Institute investigator. “Many mental disorders, including depression, schizophrenia and anxiety, affect neurotransmitter systems.”

The two protein parts are known as neuronal SNAREs and synaptotagmin-1. “Both parts of this protein complex are essential,” Brunger said, “but until now it was unclear how its two pieces fit and work together.” Earlier X-ray studies, including experiments at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) nearly two decades ago, shed light on the structure of the SNARE complex, a helical protein bundle found in yeasts and mammals.

SNAREs play a key role in the brain’s chemical signaling by joining, or “fusing,” little packets of neurotransmitters to the outer edges of neurons, where they are released and then dock with chemical receptors in another neuron to trigger a response.

Explains rapid triggering of brain signaling

In this latest research, the scientists found that when the SNAREs and synaptotagmin-1 join up, they act as an amplifier for a slight increase in calcium concentration, triggering a gunshot-like release of neurotransmitters from one neuron to another. They also learned that the proteins join together before they arrive at a neuron’s membrane, which helps to explain how they trigger brain signaling so rapidly.

The team speculates that several of the joined protein complexes may group together and simultaneously interact with the same vesicle to efficiently trigger neurotransmitter release, an exciting area for further studies. “The structure of the SNARE-synaptotagmin-1 complex is a milestone that the field has awaited for a long time, and it sets the framework for a better understanding of the system,” said James Rothman, a professor at Yale University who discovered the SNARE proteins and shared the 2013 Nobel Prize in Physiology or Medicine.

Thomas C. Südhof, a professor at the Stanford School of Medicine and Howard Hughes Medical Institute investigator who shared that 2013 Nobel Prize with Rothman, discovered synaptotagmin-1 and showed that it plays an important role as a calcium sensor and calcium-dependent trigger for neurotransmitter release.

“The new structure has identified unanticipated interfaces between synaptotagmin-1 and the neuronal SNARE complex that change how we think about their interaction by revealing, in atomic detail, exactly where they bind together,” Südhof said. “This is a new concept that goes much beyond previous general models of how synaptotagmin-1 functions.”

Using crystals, robotics and X-rays to advance neuroscience

To study the joined protein structure, researchers in Brunger’s laboratory at the Stanford School of Medicine found a way to grow crystals of the complex. They used a robotic system developed at SSRL to study the crystals at SLAC’s LCLS, an X-ray laser that is one of the brightest sources of X-rays on the planet. The researchers combined and analyzed hundreds of X-ray images from about 150 protein crystals to reveal the atomic-scale details of the joined structure.

According to SSRL’s Aina Cohen, who oversaw the development of the highly automated platform used for the neuroscience experiment, “This experiment was the first to use this robotic platform at LCLS to determine a previously unsolved structure of a large, challenging multi-protein complex.” The study was also supported by X-ray experiments at SSRL and at Argonne National Laboratory’s Advanced Photon Source.

Brunger said future studies will explore other protein interactions relevant to neurotransmitter release. “What we studied is only a subset,” he said. “There are many other factors interacting with this system and we want to know what these look like.”

Other contributing scientists were from Lawrence Berkeley National Laboratory. The research was supported by the Howard Hughes Medical Institute, the National Institutes of Health (NIH), the DOE Office of Science, and the SSRL Structural Molecular Biology Program, which is also supported by the DOE Office of Science and the NIH’s National Institute of General Medical Sciences.


Abstract of Architecture of the synaptotagmin–SNARE machinery for neuronal exocytosis

Synaptotagmin-1 and neuronal SNARE proteins have central roles in evoked synchronous neurotransmitter release; however, it is unknown how they cooperate to trigger synaptic vesicle fusion. Here we report atomic-resolution crystal structures of Ca2+- and Mg2+-bound complexes between synaptotagmin-1 and the neuronal SNARE complex, one of which was determined with diffraction data from an X-ray free-electron laser, leading to an atomic-resolution structure with accurate rotamer assignments for many side chains. The structures reveal several interfaces, including a large, specific, Ca2+-independent and conserved interface. Tests of this interface by mutagenesis suggest that it is essential for Ca2+-triggered neurotransmitter release in mouse hippocampal neuronal synapses and for Ca2+-triggered vesicle fusion in a reconstituted system. We propose that this interface forms before Ca2+ triggering, moves en bloc as Ca2+ influx promotes the interactions between synaptotagmin-1 and the plasma membrane, and consequently remodels the membrane to promote fusion, possibly in conjunction with other interfaces.

Koko the gorilla shows signs of early speech


UW-Madison Campus Connection | Koko the Gorilla Coughs

Koko the gorilla has learned vocal and breathing behaviors that may change the perception that humans are the only primates with the capacity for speech.

In 2010, Marcus Perlman started research work at The Gorilla Foundation in California, where Koko has spent more than 40 years living immersed with humans — interacting for many hours each day with psychologist Penny Patterson and biologist Ron Cohn.

“I went there with the idea of studying Koko’s gestures, but as I got into watching videos of her, I saw her performing all these amazing vocal behaviors,” says Perlman, now a postdoctoral researcher at the University of Wisconsin-Madison.

“Decades ago, in the 1930s and ’40s, a couple of husband-and-wife teams of psychologists tried to raise chimpanzees as much as possible like human children and teach them to speak. Their efforts were deemed a total failure,” Perlman says. “Since then, there is an idea that apes are not able to voluntarily control their vocalizations or even their breathing.”

Instead, the thinking went, the calls apes make pop out almost reflexively in response to their environment — the appearance of a dangerous snake, for example. And the particular vocal repertoire of each ape species was thought to be fixed, with the animals unable to learn new vocal and breathing-related behaviors.


UW-Madison Campus Connection | Koko the gorilla blows her nose

These limits fit a theory on the evolution of language. “This idea says there’s nothing that apes can do that is remotely similar to speech,” Perlman says. “And, therefore, speech essentially evolved — completely new — along the human line since our last common ancestor with chimpanzees.”

Learned vocalization and breathing

However, in a study published online in July in the journal Animal Cognition, Perlman and collaborator Nathaniel Clark of the University of California, Santa Cruz, sifted 71 hours of video of Koko interacting with Patterson and Cohn and others, and found repeated examples of Koko performing nine different, voluntary behaviors that required control over her vocalization and breathing. These were learned behaviors, not part of the typical gorilla repertoire.

Among other things, Perlman and Clark watched Koko blow a raspberry (or blow into her hand) when she wanted a treat, blow her nose into a tissue, play wind instruments, huff moisture onto a pair of glasses before wiping them with a cloth and mimic phone conversations by chattering wordlessly into a telephone cradled between her ear and the crook of an elbow.


UW-Madison Campus Connection | Koko the gorilla plays an instrument

“She doesn’t produce a pretty, periodic sound when she performs these behaviors, like we do when we speak,” Perlman says. “But she can control her larynx enough to produce a controlled grunting sound.” Koko can also cough on command — impressive for a gorilla because it requires her to close off her larynx.

These behaviors are all learned, Perlman figures, and the result of living with humans since Koko was just six months old.

This suggests that some of the evolutionary groundwork for the human ability to speak was in place at least by the time of our last common ancestor with gorillas, estimated to be around 10 million years ago.

“Koko bridges a gap,” Perlman says. “She shows the potential under the right environmental conditions for apes to develop quite a bit of flexible control over their vocal tract. It’s not as fine as human control, but it is certainly control.”

Orangutans have also demonstrated some impressive vocal and breathing-related behavior, according to Perlman, indicating the whole great ape family may share the abilities Koko has learned to tap.


Abstract of Learned vocal and breathing behavior in an enculturated gorilla

We describe the repertoire of learned vocal and breathing-related behaviors (VBBs) performed by the enculturated gorilla Koko. We examined a large video corpus of Koko and observed 439 VBBs spread across 161 bouts. Our analysis shows that Koko exercises voluntary control over the performance of nine distinctive VBBs, which involve variable coordination of her breathing, larynx, and supralaryngeal articulators like the tongue and lips. Each of these behaviors is performed in the context of particular manual action routines and gestures. Based on these and other findings, we suggest that vocal learning and the ability to exercise volitional control over vocalization, particularly in a multimodal context, might have figured relatively early into the evolution of language, with some rudimentary capacity in place at the time of our last common ancestor with great apes.

Newly discovered brain network recognizes what’s new, what’s familiar

The Parietal Memory Network, a newly discovered memory and learning network, shows consistent patterns of activation and deactivation in three distinct regions of the parietal cortex in the brain’s left hemisphere — the precuneus, the mid-cingulate cortex and the dorsal angular gyrus (credit: Image adapted from Creative Commons original by Patrick J. Lynch, medical illustrator; C. Carl Jaffe, MD, cardiologist)

New research from Washington University in St. Louis has identified a novel learning and memory brain network, dubbed the Parietal Memory Network (PMN), that processes incoming information based on whether it’s something we’ve experienced previously or appears to be new and unknown — helping us recognize, for instance, whether a face is that of a familiar friend or a complete stranger.

The study pulls together evidence from multiple neuroimaging studies and methods to demonstrate the existence of this previously unknown and distinct functional brain network, one that appears to have broad involvement in human memory processing.

“When an individual sees a novel stimulus, this network shows a marked decrease in activity,” said Adrian Gilmore, first author of the study and a fifth-year psychology doctoral student at Washington University. “When an individual sees a familiar stimulus, this network shows a marked increase in activity.”

The new memory and learning network shows consistent patterns of activation and deactivation in three distinct regions of the parietal cortex in the brain’s left hemisphere — the precuneus, the mid-cingulate cortex, and the dorsal angular gyrus.

Activity within the PMN during the processing of incoming information (encoding) can be used to predict how well that information will be stored in memory and later made available for successful retrieval.
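As a minimal, hypothetical sketch of that kind of “subsequent memory” analysis (simulated values, not the study’s data): per-item PMN activity measured at encoding is used to predict whether each item is later remembered.

```python
# Sketch of a "subsequent memory" analysis of the kind the passage describes:
# use PMN activity measured at encoding to predict whether each item is later
# remembered. The activity values and outcomes below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_items = 200
pmn_activity = rng.normal(size=n_items)        # encoding-phase PMN signal per item
# Simulate a modest relationship: higher encoding activity -> more likely remembered.
remembered = rng.random(n_items) < 1 / (1 + np.exp(-1.5 * pmn_activity))

model = LogisticRegression()
acc = cross_val_score(model, pmn_activity.reshape(-1, 1), remembered, cv=5)
print(f"cross-validated prediction accuracy: {acc.mean():.2f}")
```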

Researchers identified interesting characteristics of the PMN by analyzing data from a range of previously published neuroimaging studies. Using converging bits of evidence from dozens of fMRI brain experiments, their study shows how activity in the PMN changes during the completion of specific mental tasks and how the regions interact during resting states when the brain is involved in no particular activity or mental challenge.

This study builds on research by Marcus Raichle, MD, the Alan A. and Edith L. Wolff Distinguished Professor of Medicine, and other neuroscience researchers at Washington University, which established the existence of another functional brain network that remains surprisingly active when the brain is not involved in a specific activity, a system known as the Default Mode Network.

Like the Default Mode Network, key regions of the PMN were shown to hum in a similar unison while the brain is in relative periods of rest. And while key regions of the PMN are located close to the Default Mode Network, the PMN appears to be its own distinct and separate functional network, preliminary findings suggest.

A broad role in learning and recall

Another characteristic that sets the PMN apart from other functional networks is that its activity patterns remain consistent regardless of the type of mental challenge it is processing.

Many regions of the cortex jump into action only during the processing of a very specific task, such as learning a list of words, but remain relatively inactive during very similar tasks, such as learning a group of faces. The PMN, on the other hand, exhibits activity across a wide range of mental tasks, with levels rising and falling based on how much a task’s novelty or familiarity captures our attention.

“It seems like the amount of change relies heavily on how much a given stimulus captures our attention,” Gilmore said. “If something really stands out as old or new, you see much larger changes in the network’s activity than if it doesn’t stand out as much.”

The consistency of these patterns across various types of processing tasks suggests that the PMN plays a broad role in many different learning and recall processes, the research team suggests.

“A really cool feature of the PMN is that it seems to show its response patterns regardless of what you’re doing,” Gilmore said. “The PMN doesn’t seem to care what it is that you’re trying to do. It deactivates when we encounter something new, and activates when we encounter something that we’ve seen before. This makes it a really promising target for future research in areas such as education or Alzheimer’s research, where we want to foster or improve memory performance broadly, rather than focusing on specific tasks.”

The study is forthcoming in the September issue of the journal Trends in Cognitive Sciences.


Abstract of A parietal memory network revealed by multiple MRI methods

The manner by which the human brain learns and recognizes stimuli is a matter of ongoing investigation. Through examination of meta-analyses of task-based functional MRI and resting state functional connectivity MRI, we identified a novel network strongly related to learning and memory. Activity within this network at encoding predicts subsequent item memory, and at retrieval differs for recognized and unrecognized items. The direction of activity flips as a function of recent history: from deactivation for novel stimuli to activation for stimuli that are familiar due to recent exposure. We term this network the ‘parietal memory network’ (PMN) to reflect its broad involvement in human memory processing. We provide a preliminary framework for understanding the key functional properties of the network.