Printing molten glass (credit: John Klein et al./3D Printing and Additive Manufacturing)
An additive-manufacturing glass-printing process called G3DP (Glass 3D Printing) has been developed by researchers in the Mediated Matter Group at the MIT Media Lab in collaboration with the Glass Lab at MIT.
Glass-printing platform. (1) Crucible, (2) heating elements, (3) nozzle, (4) thermocouple, (5) removable feed access lid. (credit: John Klein et al./3D Printing and Additive Manufacturing)
The platform is based on a dual heated-chamber concept. The upper chamber acts as a Kiln Cartridge (a thermally insulated heater) operating at about 1900°F to melt the glass, while the lower chamber serves to anneal the printed structures (cooling them slowly to relieve internal stresses). The molten material gets funneled through an alumina-zircon-silica nozzle, which extrudes it onto a build platform, where it cools and hardens. By tuning the form, transparency, and color variation, the process can drive, limit, or control light transmission, reflection, and refraction in the final material.
Detail of a colored printed object (credit: John Klein et al./3D Printing and Additive Manufacturing)
The G3DP project was created in collaboration between the Mediated Matter Group at the MIT Media Lab, the Mechanical Engineering Department, the MIT Glass Lab, and the Wyss Institute.
A selection of glass pieces will appear in an exhibition at Cooper Hewitt, Smithsonian Design Museum, in New York City in 2016. An “Additive Manufacturing of Optically Transparent Glass” patent application was filed on April 25, 2014.
How far can science push the limits of human life?
That was the theme of a Crosstalks webcast today, “The dilemma of human enhancement,” available for download.
The show addressed questions like “Can we prevent people from dying? With implants, nanotechnology, artificial body parts and smart drugs we can enhance human physiology beyond our current limitations. But should we really pursue this? And can we do it responsibly?”
Participants
Maria Konovalenko, Molecular biophysicist, Program Coordinator for the Science for Life Extension Foundation.
Zoltan Istvan, American writer, philosopher, futurist, and 2016 presidential candidate for the newly formed Transhumanist Party.
Gustav Nilsonne, MD, PhD, researcher in cognitive neuroscience at Stockholm University.
Karim Jebari, Ph.D. in analytic philosophy at KTH Royal Institute of Technology and postdoc at the Institute for Futures Studies.
Mats Nilsson, Lecturer and researcher at KTH Royal Institute of Technology.
Crosstalks is an international academic talk show broadcast once a month as a joint venture produced by Stockholm University and KTH Royal Institute of Technology, moderated by journalist Johanna Koljonen.
Researchers are removing a greenhouse gas from the air while generating carbon nanofibers like these (credit: Stuart Licht, Ph.D.)
A research team of chemists at George Washington University has developed a technology that can economically convert atmospheric CO2 directly from the air into highly valued carbon nanofibers for industrial and consumer products — converting an anthropogenic greenhouse gas from a climate change problem to a valuable commodity, they say.
“Such nanofibers are used to make strong carbon composites, such as those used in the Boeing Dreamliner, as well as in high-end sports equipment, wind turbine blades and a host of other products,” said Stuart Licht, Ph.D., team leader.
The researchers had previously reported making fertilizer and cement without emitting CO2. Now the team, which includes postdoctoral fellow Jiawen Ren, Ph.D., and graduate student Jessica Stuart, says its research could shift CO2 from a global-warming problem to a feedstock for the manufacture of in-demand carbon nanofibers.
Licht calls his approach “diamonds from the sky.” That refers to carbon being the material that diamonds are made of, and also hints at the high value of the products, such as carbon nanofibers.
A low-energy, high-efficiency process
The researchers claim this low-energy process can be run efficiently, using only a few volts of electricity, sunlight, and a whole lot of carbon dioxide. The system uses electrolytic syntheses to make the nanofibers. Here’s how:
To power the syntheses, heat and electricity are produced through a hybrid and extremely efficient concentrating solar-energy system. The system focuses the sun’s rays on a photovoltaic solar cell to generate electricity and on a second system to generate heat and thermal energy, which raises the temperature of an electrolytic cell.
CO2 is broken down in a high-temperature electrolytic bath of molten carbonates at 1,380 degrees F (750 degrees C).
Atmospheric air is added to an electrolytic cell.
The CO2 dissolves when subjected to the heat and direct current through electrodes of nickel and steel.
The carbon nanofibers build up on the steel electrode, where they can be removed.
Licht estimates electrical energy costs of this “solar thermal electrochemical process” to be around $1,000 per ton of carbon nanofiber product. That means the cost of running the system is hundreds of times less than the value of product output, he says.
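A rough way to see where that estimate comes from is Faraday’s law: reducing CO2 to carbon is a four-electron process (see the battery equations below), so the minimum electrolysis energy per ton of carbon follows directly from the cell voltage. Here is a minimal back-of-envelope sketch; the ~1 V operating voltage and $0.10/kWh electricity price are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope check of the ~$1,000/ton energy-cost estimate via
# Faraday's law. Assumptions (not from the paper): ~1 V cell voltage
# and $0.10/kWh electricity; CO2 -> C is a 4-electron reduction.
F = 96485.0           # Faraday constant, C per mol of electrons
N_ELECTRONS = 4       # electrons per carbon atom (C goes from +4 to 0)
M_CARBON = 12.0       # g/mol
VOLTAGE = 1.0         # assumed cell voltage, V (Licht cites <1 V at 750 C)
PRICE_PER_KWH = 0.10  # assumed electricity price, USD/kWh

charge_per_ton = N_ELECTRONS * F * 1e6 / M_CARBON  # coulombs per metric ton of C
energy_kwh = charge_per_ton * VOLTAGE / 3.6e6      # joules -> kWh
print(f"Energy: {energy_kwh:,.0f} kWh/ton")              # ~8,900 kWh/ton
print(f"Cost:   ${energy_kwh * PRICE_PER_KWH:,.0f}/ton") # ~$890/ton
```

At those assumed numbers the electrical energy cost lands near $900 per ton of carbon, consistent with Licht’s estimate.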
Decreasing CO2 to pre-industrial-revolution levels
“We calculate that with a physical area less than 10 percent the size of the Sahara Desert, our process could remove enough CO2 to decrease atmospheric levels to those of the pre-industrial revolution within 10 years,” he says.
At this time, the system is experimental. Licht’s biggest challenge will be to ramp up the process and gain experience to make consistently sized nanofibers. “We are scaling up quickly,” he adds, “and soon should be in range of making tens of grams of nanofibers an hour.”
Licht explains that one advance the group has recently achieved is the ability to synthesize carbon fibers using even less energy than when the process was initially developed. “Carbon nanofiber growth can occur at less than 1 volt at 750 degrees C, which for example is much less than the 3–5 volts used in the 1,000 degree C industrial formation of aluminum,” he says.
No published details on overall energy costs and efficiency are yet available (to be updated).
Abstract of New approach to carbon dioxide utilization: The carbon molten air battery
As the levels of carbon dioxide (CO2) increase in the Earth’s atmosphere, the effects on climate change become increasingly apparent. As the demand to reduce our dependence on fossil fuels and lower our carbon emissions increases, a transition to renewable energy sources is necessary. Cost-effective large-scale electrical energy storage must be established for renewable energy to become a sustainable option for the future. We’ve previously shown that carbon dioxide can be captured directly from the air at solar efficiencies as high as 50%, and that carbon dioxide associated with cement formation and the production of other commodities can be electrochemically avoided in the STEP process.1-3
The carbon molten air battery, presented by our group in late 2013, is attractive due to its scalability, location flexibility, and construction from readily available resources, providing a battery that can be useful for large scale applications, such as the storage of renewable electricity.4
Uncommonly, the carbon molten air battery can utilize carbon dioxide directly from the air:
(1) charging: CO₂(g) → C(solid) + O₂(g)
(2) discharging: C(solid) + O₂(g) → CO₂(g)
More specifically, in a molten carbonate electrolyte containing added oxide, such as lithium carbonate with lithium oxide, the four-electron charging reaction (eq. 1) approaches 100% faradaic efficiency and can be described by the following two equations:
(1a) O²⁻(dissolved) + CO₂(g) → CO₃²⁻(molten)
(1b) CO₃²⁻(molten) → C(solid) + O₂(g) + O²⁻(dissolved)
Thus, powered by carbon formed directly from the CO2 in our earth’s atmosphere, the carbon molten air battery is a viable system to provide large-scale energy storage.
1. S. Licht, “Efficient Solar-Driven Synthesis, Carbon Capture, and Desalinization, STEP: Solar Thermal Electrochemical Production of Fuels, Metals, Bleach,” Advanced Materials, 47, 5592 (2011).
2. S. Licht, H. Wu, C. Hettige, B. Wang, J. Lau, J. Asercion, J. Stuart, “STEP Cement: Solar Thermal Electrochemical Production of CaO without CO2 emission,” Chemical Communications, 48, 6019 (2012).
3. S. Licht, B. Cui, B. Wang, F.-F. Li, J. Lau, S. Liu, “Ammonia synthesis by N2 and steam electrolysis in molten hydroxide suspensions of nanoscale Fe2O3,” Science, 345, 637 (2014).
4. S. Licht, B. Cui, J. Stuart, B. Wang, J. Lau, “Molten Air Batteries – A new, highest energy class of rechargeable batteries,” Energy & Environmental Science, 6, 3646 (2013).
A cross-section of a rat’s brain, showing where the key decisions are made about whether a memory is new or old and familiar (credit: Johns Hopkins University)
Know that feeling when you see someone and realize you may know them (or not)? Now we actually know where in the brain that happens: the CA3 region of the hippocampus, the seat of memory, thanks to Johns Hopkins University neuroscientists.
“You see a familiar face and say to yourself, ‘I think I’ve seen that face.’ But is this someone I met five years ago, maybe with thinner hair or different glasses — or is it someone else entirely?” said James J. Knierim, a professor of neuroscience at the university’s Zanvyl Krieger Mind/Brain Institute who led the research, described in the current issue of the journal Neuron.
Is that you under that beard? Oops, excuse me. “That’s one of the biggest problems our memory system has to solve,” Knierim said. “The final job of the CA3 region is to make the decision: Is it the same or is it different? Usually you are correct in remembering that this person is a slightly different version of the person you met years ago.
“But when you are wrong, and it embarrassingly turns out that this is a complete stranger, you want to create a memory of this new person that is absolutely distinct from the memory of your familiar friend, so you don’t make the mistake again.”
Would you like chocolate sprinkles on that cheese? Knierim and associates implanted electrodes in the hippocampus of rats and monitored them as they got to know an environment and as that environment changed. They trained the rats to run around a track, eating chocolate sprinkles. The track floor had four different textures — sandpaper, carpet padding, duct tape and a rubber mat.
The rat could see, feel and smell the differences in the textures. Meanwhile, a black curtain surrounding the track had various objects attached to it. Over 10 days, the rats built mental maps of that environment.
Messing with rat minds for fun and science. Then the experimenters changed things up. They rotated the track counter-clockwise while rotating the curtain clockwise, creating a perceptual mismatch in the rats’ minds. The effect, Knierim said, was similar to opening the door of your home and finding all of your pictures hanging on different walls and your furniture moved.
“Would you recognize it as your home or think you are lost?” he said. “It’s a very disorienting experience and a very uncomfortable feeling.”
Even when the perceptual mismatch between the track and curtain was small, the “pattern-separating” part of CA3 almost completely changed its activity patterns, creating a new memory of the altered environment. But the “pattern-completing” part of CA3 tended to retrieve a similar activity pattern used to encode the original memory, even when the perceptual mismatch increased.
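This “pattern completion” behavior is the signature of the attractor-network models of CA3 mentioned in the abstract below. As a toy illustration only (a textbook Hopfield network, not the authors’ analysis), a corrupted cue relaxes back to the nearest stored memory:

```python
import numpy as np

# Toy Hopfield network illustrating "pattern completion": a noisy cue is
# pulled back to the closest stored memory, as in classical attractor-network
# models of CA3. Purely illustrative; not the authors' model or data.
rng = np.random.default_rng(0)
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weights, no self-connections
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)

# Corrupt 20% of one stored pattern -- a "slightly different version"
cue = patterns[0].copy()
flip = rng.choice(n_units, size=20, replace=False)
cue[flip] *= -1

# Asynchronous updates until the state settles into an attractor
state = cue.copy()
for _ in range(10):
    for i in rng.permutation(n_units):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / n_units
print(f"Overlap with original memory: {overlap:.2f}")  # ~1.0 -> completed
```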
The findings, which validate models about how memory works, could help explain what goes wrong with memory in diseases like Alzheimer’s and could help to preserve people’s memories as they age.
This research was supported by National Institutes of Health grants and by the Johns Hopkins University Brain Sciences Institute.
Abstract of Neural population evidence of functional heterogeneity along the CA3 transverse axis: Pattern completion vs. pattern separation
Classical theories of associative memory model CA3 as a homogeneous attractor network because of its strong recurrent circuitry. However, anatomical gradients suggest a functional diversity along the CA3 transverse axis. We examined the neural population coherence along this axis, when the local and global spatial reference frames were put in conflict with each other. Proximal CA3 (near the dentate gyrus), where the recurrent collaterals are the weakest, showed degraded representations, similar to the pattern separation shown by the dentate gyrus. Distal CA3 (near CA2), where the recurrent collaterals are the strongest, maintained coherent representations in the conflict situation, resembling the classic attractor network system. CA2 also maintained coherent representations. This dissociation between proximal and distal CA3 provides strong evidence that the recurrent collateral system underlies the associative network functions of CA3, with a separate role of proximal CA3 in pattern separation.
Progressively zoomed-in images of graphene nanoribbons grown on germanium (gray area). The ribbons automatically align perpendicularly and naturally grow in “armchair” edge configuration. (credit: Arnold Research Group and Guisinger Research Group)
Graphene, an atom-thick material with extraordinary properties, normally functions as a conductor of electricity, but not as a semiconductor. University of Wisconsin-Madison engineers have now grown semiconducting graphene nanoribbons directly on germanium wafers. The advance is significant because it could allow manufacturers to easily use graphene nanoribbons in hybrid integrated circuits, which promise to significantly boost the performance of next-generation electronic devices.
The technology could also have specific uses in high-performance industrial and military applications, such as sensors that detect specific chemical and biological species and photonic devices that manipulate light. More importantly, the technique promises to be easily scaled up for mass production and is compatible with the prevailing fab infrastructure used in semiconductor processing.
The development was announced in an open-access paper published Aug. 10 in the journal Nature Communications by Michael Arnold, an associate professor of materials science and engineering at UW-Madison, Ph.D. student Robert Jacobberger, and their collaborators.
How to create ultra-thin “armchair” graphene nanoribbons
Armchair shape in graphene sheet (credit: Rajaram Narayanan/Jacobs School of Engineering/UC San Diego)
“Graphene nanoribbons that can be grown directly on the surface of a semiconductor like germanium are more compatible with planar processing that’s used in the semiconductor industry, and so there would be less of a barrier to integrating these really excellent materials into electronics in the future,” Arnold says.
Graphene, a sheet of carbon atoms that is only one atom in thickness, conducts electricity and dissipates heat much more efficiently than silicon, the material most commonly found in today’s computer chips.
But to exploit graphene’s remarkable electronic properties in semiconductor applications where current must be switched on and off, graphene nanoribbons need to be less than 10 nanometers wide. In addition, the nanoribbons must have smooth, well-defined “armchair” edges in which the carbon-carbon bonds are parallel to the length of the ribbon.
Researchers have typically fabricated nanoribbons by using lithographic techniques to cut larger sheets of graphene into ribbons. However, this “top-down” fabrication approach lacks precision and produces nanoribbons with very rough edges.
Another strategy for making nanoribbons is to use a “bottom-up” approach such as surface-assisted organic synthesis, where molecular precursors react on a surface to polymerize nanoribbons. Arnold says surface-assisted synthesis can produce beautiful nanoribbons with precise, smooth edges, but this method only works on metal substrates and the resulting nanoribbons are thus far too short for use in electronics.
Chemical vapor deposition process breakthrough
To overcome these hurdles, the UW-Madison researchers pioneered a bottom-up technique in which they grow ultra-narrow nanoribbons with smooth, straight edges directly on germanium wafers using a process called chemical vapor deposition. In this process, the researchers start with methane, which adsorbs to the germanium surface and decomposes to form various hydrocarbons. These hydrocarbons react with each other on the surface, where they form graphene.
Arnold’s team made its discovery when it explored dramatically slowing the growth rate of the graphene crystals by decreasing the amount of methane in the chemical vapor deposition chamber. They found that at a very slow growth rate, the graphene crystals naturally grow into long nanoribbons on a specific crystal facet of germanium. By simply controlling the growth rate and growth time, the researchers can easily tune the nanoribbon width to be less than 10 nanometers.
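The abstract below puts a number on “very slow”: the growth rate in the width direction is under 5 nm per hour. A minimal sketch of the width-equals-rate-times-time arithmetic; the specific rates and times are illustrative examples bounded by that figure, not measured growth conditions:

```python
# Illustrative arithmetic only: ribbon width ~ (width-direction growth rate)
# x (growth time). The paper bounds the rate at <5 nm/h; the values below
# are examples, not reported process conditions.
TARGET_NM = 10  # ribbons must be narrower than this to act as semiconductors
for rate_nm_per_h in (1.0, 2.5, 5.0):
    for hours in (1, 2, 4):
        width = rate_nm_per_h * hours
        verdict = "semiconducting regime" if width < TARGET_NM else "too wide"
        print(f"{rate_nm_per_h:.1f} nm/h x {hours} h -> {width:4.1f} nm ({verdict})")
```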
“What we’ve discovered is that when graphene grows on germanium, it naturally forms nanoribbons with these very smooth, armchair edges,” Arnold says. “The widths can be very, very narrow and the lengths of the ribbons can be very long, so all the desirable features we want in graphene nanoribbons are happening automatically with this technique.”
The nanoribbons produced with this technique start nucleating, or growing, at seemingly random spots on the germanium and are oriented in two different directions on the surface. Arnold says the team’s future work will include controlling where the ribbons start growing and aligning them all in the same direction.
Abstract of Direct oriented growth of armchair graphene nanoribbons on germanium
Graphene can be transformed from a semimetal into a semiconductor if it is confined into nanoribbons narrower than 10 nm with controlled crystallographic orientation and well-defined armchair edges. However, the scalable synthesis of nanoribbons with this precision directly on insulating or semiconducting substrates has not been possible. Here we demonstrate the synthesis of graphene nanoribbons on Ge(001) via chemical vapour deposition. The nanoribbons are self-aligning 3° from the Ge⟨110⟩ directions, are self-defining with predominantly smooth armchair edges, and have tunable width to <10 nm and aspect ratio to >70. In order to realize highly anisotropic ribbons, it is critical to operate in a regime in which the growth rate in the width direction is especially slow, <5 nm h⁻¹. This directional and anisotropic growth enables nanoribbon fabrication directly on conventional semiconductor wafer platforms and, therefore, promises to allow the integration of nanoribbons into future hybrid integrated circuits.
American Chemical Society | A simple, cheap test for Ebola, dengue and yellow fever
MIT researchers have developed a low-cost, paper-based device that changes color, depending on whether the patient has Ebola, dengue, or yellow fever. The test is designed to facilitate diagnosis in remote, low-resource settings, takes minutes, and does not need electricity to read out results.
Standard approaches for diagnosing viral infections require technical expertise and expensive equipment, says MIT researcher Kimberly Hamad-Schifferli, Ph.D. “Typically, people perform PCR and ELISA, which are highly accurate, but they need a controlled lab environment.” Polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) are bioassays that detect pathogens directly or indirectly, respectively.
Color-changing paper devices that work like over-the-counter pregnancy tests offer a possible solution. “These are not meant to replace PCR and ELISA [lab tests], because we can’t match their accuracy,” Hamad-Schifferli says. “This is a complementary technique for places with no running water or electricity.”
When a fever strikes in a developing area, the immediate concern may be: Is it the common flu or something much worse that requires quarantine? A paper-based diagnostic test that distinguishes between yellow fever virus, Ebola, and dengue, using different colored nanoparticles tagged with virus-specific antibodies (credit: Chunwan Yen)
The researchers attached red, green, or orange nanoparticles to antibodies that specifically bind to proteins from the viruses that cause Ebola, dengue, or yellow fever, respectively. They introduced the antibody-tagged nanoparticles onto the end of a small strip of paper. In the paper’s middle, the researchers affixed “capture” antibodies to three test lines at different locations, one for each disease.
To test the device, the researchers spiked blood samples with the viral proteins and then dropped small volumes onto the end of the paper device. If a sample contained dengue proteins, for example, the dengue antibody, which was attached to a green nanoparticle, latched onto one of those proteins. This complex then migrated through the paper, until reaching the dengue fever test line, where a second dengue-specific antibody captured it. That stopped the complex from going farther down the strip, and the test line turned green. When the researchers tested samples with proteins from Ebola or yellow fever, the antibody complexes migrated to different places on the strip and turned red or orange.
“Using other laboratory tests, we know the typical concentrations of yellow fever or dengue virus in patient blood. We know that the paper-based test is sensitive enough to detect concentrations well below that range,” says Hamad-Schifferli. “It’s hard to get that information for Ebola, but we can detect down to tens of nanograms per milliliter — that’s pretty sensitive and might work with patient samples.”
Next, the researchers plan to produce kits for free distribution. “We’re giving people the components so they can build the devices themselves,” says Hamad-Schifferli. The kits will provide a flexible platform for making paper devices that can detect any disease of interest, given the right antibody. “We are trying to move this into the field and put it in the hands of the people who need it,” she says.
American Chemical Society | Paper-based test can quickly diagnose Ebola in remote areas (press conference)
Abstract of Multicolored silver nanoparticles for multiplexed disease diagnostics: Distinguishing dengue, Yellow Fever, and Ebola viruses
Rapid point-of-care (POC) diagnostic devices are needed for field-forward screening of severe acute systemic febrile illnesses. Multiplexed rapid lateral flow diagnostics have the potential to distinguish among multiple pathogens, thereby facilitating diagnosis and improving patient care. Here, we present a platform for multiplexed pathogen detection using multicolored prism-shaped silver nanoparticles (AgNPs). We exploit the size-dependent optical properties of AgNPs to construct a multiplexed paperfluidic lateral flow POC sensor. AgNPs of different sizes were conjugated to antibodies that bind to specific biomarkers. Red AgNPs were conjugated to antibodies that could recognize the glycoprotein for Ebola virus, green AgNPs to those that could recognize nonstructural protein 1 for dengue virus, and orange AgNPs to those that could recognize nonstructural protein 1 for yellow fever virus. Presence of each of the biomarkers resulted in a different colored band on the test line in the lateral flow test. Thus, we were able to use NP color to distinguish among three pathogens that cause a febrile illness. Because positive test lines can be imaged by eye or a mobile phone camera, the approach is adaptable to low-resource, widely deployable settings. This design requires no external excitation source and permits multiplexed analysis in a single channel, facilitating integration and manufacturing.
A volunteer calibrating the exoskeleton brain-computer interface (credit: Korea University/TU Berlin)
Scientists at Korea University and TU Berlin have developed a brain-computer interface (BCI) for a lower limb exoskeleton used for gait assistance by decoding specific signals from the user’s brain.
LEDs flickering at five different frequencies code for five different commands (credit: Korea University/TU Berlin)
Using an electroencephalogram (EEG) cap, the system allows users to move forward, turn left and right, sit, and stand, simply by staring at one of five flickering light emitting diodes (LEDs).
Each of the five LEDs flickers at a different frequency, corresponding to five types of movements. When the user focuses their attention on a specific LED, the flickering light generates a visual evoked potential in the EEG signal, which is then identified by a computer and used to control the exoskeleton to move in the appropriate manner (forward, left, right, stand, sit).
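That decoding step can be sketched with a standard SSVEP technique: canonical correlation analysis (CCA) against sinusoidal references at each LED frequency, the method named in the paper’s abstract below (the kNN stage is omitted here, and the frequencies, sampling rate, and channel count are illustrative assumptions, not the paper’s values):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Minimal SSVEP decoder sketch: pick the LED frequency whose sinusoidal
# reference correlates best (via CCA) with a window of multichannel EEG.
# Frequencies, sampling rate, and channel count are illustrative.
FS = 256                          # sampling rate, Hz (assumed)
LED_FREQS = [9, 11, 13, 15, 17]   # one flicker frequency per command (assumed)
COMMANDS = ["forward", "left", "right", "sit", "stand"]

def references(freq, n_samples, n_harmonics=2):
    """Sin/cos reference signals at freq and its harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def classify(eeg_window):
    """eeg_window: (n_samples, n_channels). Returns the decoded command."""
    n = eeg_window.shape[0]
    scores = []
    for f in LED_FREQS:
        u, v = CCA(n_components=1).fit_transform(eeg_window, references(f, n))
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return COMMANDS[int(np.argmax(scores))]

# Demo: synthetic 8-channel EEG dominated by the 13 Hz ("right") stimulus
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
signal = np.sin(2 * np.pi * 13 * t)
eeg = signal[:, None] * rng.uniform(0.5, 1.0, 8) + rng.normal(0, 1.0, (2 * FS, 8))
print(classify(eeg))  # expected: "right"
```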
Korea University/TU Berlin | A brain-computer interface for controlling an exoskeleton
The results are published in an open-access paper today (August 18) in the Journal of Neural Engineering.
“A key problem in designing such a system is that exoskeletons create lots of electrical ‘noise,’” explains Klaus Muller, an author of the paper. “The EEG signal [from the brain] gets buried under all this noise, but our system is able to separate out the EEG signal and the frequency of the flickering LED within this signal.”
“People with amyotrophic lateral sclerosis (ALS) (motor neuron disease) or spinal cord injuries face difficulties communicating or using their limbs,” he said. This system could let them walk again, he believes. He suggests that the control system could be added to existing BCI devices, such as OpenBCI.
In experiments with 11 volunteers, it took only a few minutes of training to operate the system. Because the system relies on flickering LEDs, volunteers were carefully screened for epilepsy prior to taking part in the research. The researchers are now working to reduce the “visual fatigue” associated with longer-term use.
Abstract of A lower limb exoskeleton control system based on steady state visual evoked potentials
Objective. We have developed an asynchronous brain–machine interface (BMI)-based lower limb exoskeleton control system based on steady-state visual evoked potentials (SSVEPs).
Approach. By decoding electroencephalography signals in real-time, users are able to walk forward, turn right, turn left, sit, and stand while wearing the exoskeleton. SSVEP stimulation is implemented with a visual stimulation unit, consisting of five light emitting diodes fixed to the exoskeleton. A canonical correlation analysis (CCA) method for the extraction of frequency information associated with the SSVEP was used in combination with k-nearest neighbors.
Main results. Overall, 11 healthy subjects participated in the experiment to evaluate performance. To achieve the best classification, CCA was first calibrated in an offline experiment. In the subsequent online experiment, our results exhibit accuracies of 91.3 ± 5.73%, a response time of 3.28 ± 1.82 s, an information transfer rate of 32.9 ± 9.13 bits/min, and a completion time of 1100 ± 154.92 s for the experimental parcour studied.
Significance. The ability to achieve such high quality BMI control indicates that an SSVEP-based lower limb exoskeleton for gait assistance is becoming feasible.
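For readers unfamiliar with the information-transfer-rate metric, the reported 32.9 bits/min is roughly what the standard Wolpaw formula gives for the paper’s own accuracy, response time, and five commands. A minimal check, assuming that standard formula (the paper’s exact computation may differ):

```python
from math import log2

# Wolpaw ITR check using the paper's reported numbers: N = 5 commands,
# P = 0.913 accuracy, T = 3.28 s per selection. Assumes the standard
# Wolpaw formula, which the paper may not use verbatim.
N, P, T = 5, 0.913, 3.28
bits_per_selection = log2(N) + P * log2(P) + (1 - P) * log2((1 - P) / (N - 1))
itr = bits_per_selection * 60 / T
print(f"{itr:.1f} bits/min")  # ~31.5, close to the reported 32.9 +/- 9.13
```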
This image of the lab-grown brain is labeled to show identifiable structures: the cerebral hemisphere, the optic stalk, and the cephalic flexure, a bend in the mid-brain region, all characteristic of the human fetal brain (credit: The Ohio State University)
Scientists at The Ohio State University have developed a miniature human brain in a dish with the equivalent brain maturity of a five-week-old fetus.
The lab-grown brain, about the size of a pencil eraser, has an identifiable structure and contains 99 percent of the genes present in the human fetal brain. Such a system will enable ethical and more rapid, accurate testing of experimental drugs before the clinical trial stage. It is intended to advance studies of genetic and environmental causes of central nervous system disorders.
“It not only looks like the developing brain, its diverse cell types express nearly all genes like a brain,” said Rene Anand, the Ohio State University professor of biological chemistry and pharmacology who led the work. “The power of this brain model bodes very well for human health because it gives us better and more relevant options to test and develop therapeutics other than [using] rodents.”
The main thing missing in this model is a vascular system. But what is there — a spinal cord, all major regions of the brain, multiple cell types, signaling circuitry and even a retina — has the potential to dramatically accelerate the pace of neuroscience research, said Anand, who is also a professor of neuroscience.
Organoid derivation and development (credit: Rene Anand and Susan McKay)
Created from pluripotent stem cells
“In central nervous system diseases, this will enable studies of either underlying genetic susceptibility or purely environmental influences, or a combination,” he said. According to genomic science, “there are up to 600 genes that give rise to autism, but we are stuck there. Mathematical correlations and statistical methods are insufficient in themselves to identify causation. You need an experimental system — you need a human brain.”
Anand’s method is proprietary, and he has filed an invention disclosure with the university. He said he used techniques to differentiate pluripotent stem cells into cells that are designed to become neural tissue, components of the central nervous system, or other brain regions.
High-resolution imaging of the organoid identifies functioning neurons and their signal-carrying extensions — axons and dendrites — as well as astrocytes, oligodendrocytes and microglia. The model also activates markers for cells that have the classic excitatory and inhibitory functions in the brain, and that enable chemical signals to travel throughout the structure.
It takes about 15 weeks to build a model system developed to match the 5-week-old fetal human brain. Anand and colleague Susan McKay, a research associate in biological chemistry and pharmacology, let the model continue to grow to the 12-week point, observing expected maturation changes along the way.
“If we let it go to 16 or 20 weeks, that might complete it, filling in that 1 percent of missing genes. We don’t know yet,” he said.
Models of brain disorders and injury with civilian and military uses
He and McKay have already used the platform to create brain organoid models of Alzheimer’s and Parkinson’s diseases and autism in a dish. They hope that with further development and the addition of a pumping blood supply, the model could be used for stroke therapy studies. For military purposes, the system offers a new platform for the study of Gulf War illness, traumatic brain injury, and post-traumatic stress disorder.
Support for the work came from the Marci and Bill Ingram Research Fund for Autism Spectrum Disorders and the Ohio State University Wexner Medical Center Research Fund.
Anand and McKay are co-founders of a Columbus-based start-up company, NeurXstem, to commercialize the brain organoid platform, and have applied for funding from the federal Small Business Technology Transfer program to accelerate its drug discovery applications.
Olga Kotelko’s brain “does not look like a 90-plus-year-old” — Beckman Institute director Art Kramer
Brain scans and cognitive tests of Olga Kotelko, a 93-year-old Canadian track-and-field athlete with more than 30 world records in her age group, may support the potential beneficial effects of exercise on cognition in the “oldest old.”
A retired teacher and mother of two, Kotelko started her athletic career late in life. She began with slow-pitch softball at age 65, and at 77 switched to track-and-field events, later enlisting the help of a coach. By the time of her death in 2014, she had won 750 gold medals in her age group in World Masters Athletics events, and had set new world records in the 100-meter, 200-meter, high jump, long jump, javelin, discus, shot put and hammer events.
Beckman Institute | Senior Olympian: 93-Year-Old Track Star Shows Physical & Mental Fitness
Lacking a peer group of reasonably healthy nonagenarians for comparison, the researchers decided to compare Kotelko with a group of 58 healthy, low-active women who were 60 to 78 years old.
“In our studies, we often collect data from adults who are between 60 and 80 years old, and we have trouble finding participants who are 75 to 80 and relatively healthy,” said U. of I. postdoctoral researcher Agnieszka Burzynska, who led the new analysis. As a result, very few studies have focused on the “oldest old,” she said.
“Although it is tough to generalize from a single study participant to other individuals, we felt very fortunate to have an opportunity to study the brain and cognition of such an exceptional individual,” said Beckman Institute director Art Kramer, an author of the new study.
Aging processes in the brain
The researchers wanted to determine whether Kotelko’s late-life athleticism had slowed — or perhaps even reversed — some of the processes of aging in her brain.
“In general, the brain shrinks with age,” Burzynska said. Fluid-filled spaces appear between the brain and the skull, and the ventricles enlarge, she said.
“The cortex, the outermost layer of cells where all of our thinking takes place, that also gets thinner,” she said. White matter tracts, which carry nerve signals between brain regions, tend to lose their structural and functional integrity over time. And the hippocampus, which is important to memory, usually shrinks with age, Burzynska said.
Previous studies have shown that regular aerobic exercise can enhance cognition and boost brain function in older adults, and can even increase the volume of specific brain regions like the hippocampus, Kramer said.
Surprising test results
In one long day at the lab, Kotelko submitted to an MRI brain scan, a cardiorespiratory fitness test on a treadmill, and cognitive tests. (All of the data are available at XNAT, a public repository; Kotelko and her daughter agreed to make her data public.) The women in the comparison group underwent the same tests and scans.
Afterwards, Kramer asked Olga if she was tired; she replied, “I rarely get tired.” The decades-younger graduate students who tested her, however, looked exhausted.
Kotelko’s brain offered some intriguing first clues about the potentially beneficial effects of her active lifestyle.
White-matter tracts remarkably intact. “Her brain did not seem to be, in general, very shrunken, and her ventricles did not seem to be enlarged,” Burzynska said. On the other hand, she had obvious signs of advanced aging in the white-matter tracts of some brain regions, Burzynska said.
“Olga had quite a lot of white-matter hyperintensities, which are markers of unspecific white-matter damage,” she said. These are common in people over age 65, and tend to increase with age, she said.
As a whole, however, Kotelko’s white-matter tracts were remarkably intact — comparable to those of women decades younger, the researchers found. And the white-matter tracts in one region of her brain — the genu of the corpus callosum, which connects the right and left hemispheres at the very front of the brain — were in great shape, Burzynska said.
“Olga had the highest measure of white-matter integrity in that part of the brain, even higher than those younger females, which was very surprising,” she said. These white-matter tracts serve a region of the brain that is engaged in tasks known to decline fastest in aging, such as reasoning, planning and self-control, Burzynska said.
Better on cognitive tests than other adults her own age. Kotelko performed worse on cognitive tests than the younger women, but better than other adults her own age who had been tested in an independent study. “She was quicker at responding to the cognitive tasks than other adults in their 90s,” Burzynska said. “And on memory, she was much better than they were.”
Hippocampus larger than expected for her age. Her hippocampus was smaller than those of the younger participants, but larger than expected given her age, Burzynska said.
The new findings are only a very limited, first step toward calculating the effects of exercise on cognition in the oldest old, she said. “We have only one Olga and only at one time point, so it’s difficult to arrive at very solid conclusions,” Burzynska said.
“But I think it’s very exciting to see someone who is highly functioning at 93, possessing numerous world records in the athletic field and actually having very high integrity in a brain region that is very sensitive to aging. I hope it will encourage people that even as we age, our brains remain plastic. We have more and more evidence for that.”
The Robert Bosch Foundation and the National Institute on Aging at the National Institutes of Health supported this research, as did Abbott Nutrition, through the Center for Nutrition, Learning and Memory at the U. of I.
Kotelko biographer Bruce Grierson prompted researchers at the Beckman Institute to study Kotelko’s brain.
Abstract of White matter integrity, hippocampal volume, and cognitive performance of a world-famous nonagenarian track-and-field athlete
Physical activity (PA) and cardiorespiratory fitness (CRF) are associated with successful brain and cognitive aging. However, little is known about the effects of PA, CRF, and exercise on the brain in the oldest-old. Here we examined white matter (WM) integrity, measured as fractional anisotropy (FA) and WM hyperintensity (WMH) burden, and hippocampal (HIPP) volume of Olga Kotelko (1919–2014). Olga began training for competitions at the age of 77 and as of June 2014 held over 30 world records in her age category in track-and-field. We found that Olga’s WMH burden was larger and the HIPP was smaller than in the reference sample (58 healthy low-active women 60–78 years old), and her FA was consistently lower in the regions overlapping with WMH. Olga’s FA in many normal-appearing WM regions, however, did not differ or was greater than in the reference sample. In particular, FA in her genu corpus callosum was higher than any FA value observed in the reference sample. We speculate that her relatively high FA may be related to both successful aging and the beneficial effects of exercise in old age. In addition, Olga had lower scores on memory, reasoning and speed tasks than the younger reference sample, but outperformed typical adults of age 90–95 on speed and memory. Together, our findings open the possibility of old-age benefits of increasing PA on WM microstructure and cognition despite age-related increase in WMH burden and HIPP shrinkage, and add to the still scarce neuroimaging data of the healthy oldest-old (>90 years) adults.
Research has moved online, with more than 80 percent of U.S. students using Wikipedia for research papers, but entries on politically controversial science topics can contain egregious errors, researchers claim (credit: Pixabay)
Wikipedia entries on politically controversial scientific topics can be unreliable due to “information sabotage,” according to an open-access paper published today in the journal PLOS One.
The authors (Gene E. Likens and Adam M. Wilson) analyzed Wikipedia edit histories for three politically controversial scientific topics (acid rain, evolution, and global warming), and four non-controversial scientific topics (the standard model in physics, heliocentrism, general relativity, and continental drift).
“Egregious errors and a distortion of consensus science”
Using nearly a decade of data, the authors teased out daily edit rates, the mean size of edits (words added, deleted, or edited), and the mean number of page views per day. Across the board, politically controversial scientific topics were edited more heavily and viewed more often.
“Wikipedia’s global warming entry sees 2–3 edits a day, with more than 100 words altered, while the standard model in physics has around 10 words changed every few weeks,” Wilson notes. “The high rate of change observed in politically controversial scientific topics makes it difficult for experts to monitor their accuracy and contribute time-consuming corrections.”
While the edit rate of the acid rain article was less than the edit rate of the evolution and global warming articles, it was significantly higher than the non-controversial topics. “In the scientific community, acid rain is not a controversial topic,” said professor Likens. “Its mechanics have been well understood for decades. Yet, despite having ‘semi-protected’ status to prevent anonymous changes, Wikipedia’s acid rain entry receives near-daily edits, some of which result in egregious errors and a distortion of consensus science.”
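Edit rates like these can be measured from Wikipedia’s public revision histories. A minimal sketch using the MediaWiki API; the page titles and the 500-revision window are illustrative, and the study’s full method also tracked words changed and page views:

```python
from datetime import datetime

import requests

# Estimate a Wikipedia article's recent edit rate (edits/day) from its public
# revision history via the MediaWiki API. Page titles and the 500-revision
# window are illustrative choices, not the study's exact method.
API = "https://en.wikipedia.org/w/api.php"

def edits_per_day(title, n_revisions=500):
    params = {
        "action": "query", "format": "json", "prop": "revisions",
        "titles": title, "rvprop": "timestamp", "rvlimit": n_revisions,
    }
    page = next(iter(
        requests.get(API, params=params).json()["query"]["pages"].values()))
    stamps = [datetime.strptime(r["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
              for r in page["revisions"]]  # returned newest first
    days = max((stamps[0] - stamps[-1]).total_seconds() / 86400, 1)
    return len(stamps) / days

for title in ("Global warming", "Standard Model"):
    print(f"{title}: {edits_per_day(title):.2f} edits/day")
```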
Wikipedia’s limitations
Likens adds, “As society turns to Wikipedia for answers, students, educators, and citizens should understand its limitations for researching scientific topics that are politically charged. On entries subject to edit-wars, like acid rain, evolution, and global change, one can obtain — within seconds — diametrically different information on the same topic.”
However, the authors note that as Wikipedia matures, there is evidence that the breadth of its scientific content is increasingly based on source material from established scientific journals. They also note that Wikipedia employs algorithms to help identify and correct blatantly malicious edits, such as profanity. But in their view, it remains to be seen how Wikipedia will manage the dynamic, changing content that typifies politically charged science topics.
To help readers critically evaluate Wikipedia content, Likens and Wilson suggest identifying entries that are known to have significant controversy or edit wars. They also recommend quantifying the reputation of individual editors. In the meantime, users are urged to cast a critical eye on Wikipedia source material, which is found at the bottom of each entry.
Wikipedia editors not impressed
In the Wikipedia “User_talk:Jimbo_Wales” page, several Wikipedia editors questioned the PLOS One authors’ statistical accuracy and conclusions, and noted that the data is three years out of date. “I don’t think this dataset can make any claim about controversial subjects at all,” one editor said. “It simply looks at too few articles, and there are too many explanations.”
“It has long been a source of bewilderment to me that we allow climate change denialists to run riot on Wikipedia,” said another.
Abstract of Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale
Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for ‘edit wars’ surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically “controversial” (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider “noncontroversial” (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.