Sleep disruptions similar to jet lag linked to memory and learning problems

(credit: iStock)

Chemical changes in brain cells caused by disturbances in the body’s day-night cycle may lead to the learning and memory loss associated with Alzheimer’s disease, according to a University of California, Irvine (UCI) study.

People with Alzheimer’s often have problems with sleeping or may experience changes in their slumber schedule. Scientists do not completely understand why these disturbances occur.

“The issue is whether poor sleep accelerates the development of Alzheimer’s disease or vice versa,” said UCI biomedical engineering professor Gregory Brewer, affiliated with UCI’s Institute for Memory Impairments and Neurological Disorders. “It’s a chicken-or-egg dilemma, but our research points to disruption of sleep as the accelerator of memory loss.”

Inducing jet lag in mice causes low glutathione levels

To examine the link between learning and memory and circadian disturbances, his team altered normal light-dark patterns, shortening the dark period by eight hours every three days, for two groups of mice: young mouse models of Alzheimer’s disease (mice genetically modified to have AD symptoms) and normal mice.
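In code, the lighting protocol looks roughly like this (a minimal sketch: the 12-hour baseline comes from the study’s abstract, and the day-by-day bookkeeping of the shift is our assumed reading of “an 8 h shortening of the dark period every 3 days”):

```python
# A sketch of the jet-lag lighting protocol described above, assuming a
# 12 h light : 12 h dark baseline (per the abstract) and reading the
# protocol as a truncated night on every third day.
def jetlag_schedule(days, light_h=12, dark_h=12, shift_h=8, every=3):
    """Return per-day (hours of light, hours of dark) tuples."""
    schedule = []
    for day in range(1, days + 1):
        if day % every == 0:                  # shift day: 4 h night
            schedule.append((light_h, dark_h - shift_h))
        else:                                 # ordinary 12:12 day
            schedule.append((light_h, dark_h))
    return schedule

for day, (on, off) in enumerate(jetlag_schedule(9), start=1):
    print(f"day {day}: {on} h light / {off} h dark")
```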

The resulting jet lag greatly reduced activity in both sets of mice. The researchers found that in water maze tests, the AD mouse models had significant learning impairments that were absent in the AD mouse models not exposed to light-dark variations or in normal mice with jet lag. However, memory three days after training was impaired in both types of mice.

In follow-up tissue studies, they saw that jet lag caused a decrease in glutathione levels in the brain cells of all the mice. But these levels were much lower in the AD mouse models and corresponded to poor performance in the water maze tests. Glutathione is a major antioxidant that helps prevent damage to essential cellular components.

Glutathione deficiencies produce redox changes in brain cells. Redox reactions involve the transfer of electrons, which leads to alterations in the oxidation state of atoms and may affect brain metabolism and inflammation.
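The redox shift the researchers measured can be framed with the standard Nernst relation for the GSH/GSSG couple (a textbook result, not an equation from the paper). Because two GSH molecules oxidize to one GSSG, the half-cell potential depends on the square of the GSH concentration, making the cell’s redox state disproportionately sensitive to GSH depletion:

$$ E_{\mathrm{GSH/GSSG}} = E^{\circ\prime} - \frac{RT}{2F}\,\ln\frac{[\mathrm{GSH}]^{2}}{[\mathrm{GSSG}]}, \qquad E^{\circ\prime} \approx -240\ \mathrm{mV\ at\ pH\ 7}, $$

with R the gas constant, T temperature, and F the Faraday constant. At body temperature, halving [GSH] at fixed [GSSG] shifts the potential roughly 18 mV toward the oxidized state.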

Brewer pointed to the accelerated oxidative stress as a vital component in Alzheimer’s-related learning and memory loss and noted that potential drug treatments could target these changes in redox reactions.

“This study suggests that clinicians and caregivers should add good sleep habits to regular exercise and a healthy diet to maximize good memory,” he said.

Study results appear online in the Journal of Alzheimer’s Disease.

AD has emerged as a global public health issue, currently estimated to affect 4.4% of persons 65 years old and 22% of those aged 90 and older, with an estimated 5.4 million Americans affected, according to the paper.


Abstract of Circadian Disruption Reveals a Correlation of an Oxidative GSH/GSSG Redox Shift with Learning and Impaired Memory in an Alzheimer’s Disease Mouse Model

It is unclear whether pre-symptomatic Alzheimer’s disease (AD) causes circadian disruption or whether circadian disruption accelerates AD pathogenesis. In order to examine the sensitivity of learning and memory to circadian disruption, we altered normal lighting phases by an 8 h shortening of the dark period every 3 days (jet lag) in the APPSwDI NOS2–/– model of AD (AD-Tg) at a young age (4-5 months), when memory is not yet affected compared to non-transgenic (non-Tg) mice. Analysis of activity in 12-12 h lighting or constant darkness showed only minor differences between AD-Tg and non-Tg mice. Jet lag greatly reduced activity in both genotypes during the normal dark time. Learning on the Morris water maze was significantly impaired only in the AD-Tg mice exposed to jet lag. However, memory 3 days after training was impaired in both genotypes. Jet lag caused a decrease of glutathione (GSH) levels that tended to be more pronounced in AD-Tg than in non-Tg brains and an associated increase in NADH levels in both genotypes. Lower brain GSH levels after jet lag correlated with poor performance on the maze. These data indicate that the combination of the environmental stress of circadian disruption together with latent stress of the mutant amyloid and NOS2 knockout contributes to cognitive deficits that correlate with lower GSH levels.

MOTOBOT: the first autonomous motorcycle-riding humanoid robot

MOTOBOT Ver. 1 (credit: Yamaha)

Yamaha introduced MOTOBOT Ver. 1, the first autonomous motorcycle-riding humanoid robot, at the Tokyo Motor Show Wednesday (Oct. 28). A fusion of Yamaha’s motorcycle and robotics technology, MOTOBOT rides an unmodified Yamaha YZF-R1M; Yamaha says a future version will lap a racetrack at more than 200 km/h (124 mph).

“We want to apply the fundamental technology and know-how gained in the process of this challenge to the creation of advanced rider safety and rider-support systems and put them to use in our current businesses, as well as using them to pioneer new lines of business,” says Yamaha in its press release.


Yamaha | New Yamaha MotoBot Concept Ver. 1

This robot will out-walk and out-run you one day

A walk in the park. Oregon State University engineers have successfully field-tested their walking robot, ATRIAS. (credit: Oregon State University)

Imagine robots that can walk and run like humans — or better than humans. Engineers at Oregon State University (OSU) and the Technische Universität München may have achieved a major step in that direction with their “spring-mass” implementation of human and animal walking dynamics, which allows robots to maintain balance and efficiency of motion in difficult environments.

Studies done with OSU’s ATRIAS robot model, which incorporates the spring-mass theory, show that it’s three times more energy-efficient than any other human-sized bipedal robot.
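The spring-mass idea reduces a walking leg to a point mass riding a massless spring. Below is a toy stance-phase integration of that template, the spring-loaded inverted pendulum (SLIP), with illustrative parameters only; it is not ATRIAS’s controller:

```python
import math

# Toy stance-phase integration of the spring-loaded inverted pendulum
# (SLIP), the template behind spring-mass walking.  All parameters are
# illustrative placeholders, not ATRIAS's actual values.
m, k, r0, g = 80.0, 20000.0, 1.0, 9.81  # mass (kg), leg stiffness (N/m),
                                        # rest leg length (m), gravity
x, y = -0.2, 0.97                       # point mass, relative to the foot
vx, vy = 1.2, 0.0                       # touchdown velocity (m/s)
dt = 1e-4

t = 0.0
for _ in range(200_000):                # hard cap keeps the sketch safe
    r = math.hypot(x, y)
    if r >= r0:                         # leg back at rest length: take-off
        break
    f = k * (r0 - r)                    # spring force along the leg
    ax = (x / r) * f / m                # project force onto x and y,
    ay = (y / r) * f / m - g            # add gravity
    vx += ax * dt; vy += ay * dt        # explicit Euler step
    x += vx * dt; y += vy * dt
    t += dt

print(f"stance lasted {t:.3f} s, take-off velocity ({vx:.2f}, {vy:.2f}) m/s")
```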

“I’m confident that this is the future of legged robotic locomotion,” said Jonathan Hurst, an OSU professor of mechanical engineering and director of the Dynamic Robotics Laboratory in the OSU College of Engineering. “We’ve basically demonstrated the fundamental science of how humans walk,” he said.

When further refined and perfected, walking and running robots may work in the armed forces, as fire fighters, in factories or doing ordinary household chores, he said. “This could become as big as the automotive industry,” Hurst added.

Wearable robots and prostheses too

Aspects of the locomotion technology may also assist people with disabilities, said Daniel Renjewski with the Technische Universität München, the lead author on the study published in IEEE Transactions on Robotics. “Robots are already used for gait training, and we see the first commercial exoskeletons on the market,” he said. “This enables us to build an entirely new class of wearable robots and prostheses that could allow the user to regain a natural walking gait.”

Topology and key technical features of the ATRIAS robot. ATRIAS has six electric motors powered by a lithium polymer battery. It can absorb impacts, retain its balance, and walk over rough, bumpy terrain. Power electronics, batteries, and the control computer are located inside the trunk. (credit: Daniel Renjewski et al./IEEE Transactions on Robotics)

In continued research, work will be done to improve steering, efficiency, leg configuration, inertial actuation, robust operation, external sensing, transmissions and actuators, and other technologies.

The work has been supported by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Human Frontier Science Program.


Oregon State University | ATRIAS Bipedal Robot: Takes a Walk in the Park


Abstract of Exciting Engineered Passive Dynamics in a Bipedal Robot

A common approach in designing legged robots is to build fully actuated machines and control the machine dynamics entirely in software, carefully avoiding impacts and expending a lot of energy. However, these machines are outperformed by their human and animal counterparts. Animals achieve their impressive agility, efficiency, and robustness through a close integration of passive dynamics, implemented through mechanical components, and neural control. Robots can benefit from this same integrated approach, but a strong theoretical framework is required to design the passive dynamics of a machine and exploit them for control. For this framework, we use a bipedal spring-mass model, which has been shown to approximate the dynamics of human locomotion. This paper reports the first implementation of spring-mass walking on a bipedal robot. We present the use of template dynamics as a control objective exploiting the engineered passive spring-mass dynamics of the ATRIAS robot. The results highlight the benefits of combining passive dynamics with dynamics-based control and open up a library of spring-mass model-based control strategies for dynamic gait control of robots.

What happens in the brain when we learn

Isolated cells in the visual cortex of a mouse (credit: Alfredo/Kirkwood (JHU))

A Johns Hopkins University-led research team has proven a working theory that explains what happens in the brain when we learn, as described in the current issue of the journal Neuron.

More than a century ago, Pavlov figured out that dogs fed after hearing a bell eventually began to salivate when they heard the bell ring. The team looked into the question of how Pavlov’s dogs (in “classical conditioning”) managed to associate an action with a delayed reward to create knowledge. For decades, scientists had a working theory of how it happened, but the team is now the first to prove it.

“If you’re trying to train a dog to sit, the initial neural stimuli, the command, is gone almost instantly — it lasts as long as the word sit,” said neuroscientist Alfredo Kirkwood, a professor with the university’s Zanvyl Krieger Mind/Brain Institute. “Before the reward comes, the dog’s brain has already turned to other things. The mystery was, ‘How does the brain link an action that’s over in a fraction of a second with a reward that doesn’t come until much later?’ ”

Eligibility traces

The working theory — which Kirkwood’s team has now validated experimentally — is that invisible “synaptic eligibility traces” effectively tag the synapses activated by the stimuli so that the learning can be cemented with the arrival of a reward. The reward is a neuromodulator* (neurochemical) that floods the dog’s brain with “good feelings.” Though the brain has long since processed the “sit” command, eligibility traces in the synapse respond to the neuromodulators, prompting a lasting synaptic change, a.k.a. “learning.”
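In reinforcement-learning terms, this tag-then-reward mechanism is the classic eligibility trace. The sketch below shows the textbook form, not the biophysical model tested in the paper; the time constants and learning rate are made-up illustrative numbers:

```python
import math

# Schematic eligibility trace: a brief stimulus tags the synapse; the tag
# decays silently; a delayed neuromodulator pulse ("reward") converts what
# remains of the tag into a lasting weight change.  Textbook form with
# made-up constants, not the biophysical model tested in the paper.
tau = 1.0                  # trace decay time constant (s), illustrative
eta = 0.5                  # learning rate, illustrative
dt = 0.01
w, trace = 1.0, 0.0        # synaptic weight and its eligibility trace

for step in range(int(3.0 / dt)):                 # simulate 3 s
    t = step * dt
    stimulus = 1.0 if t < 0.1 else 0.0            # brief "sit" command
    reward = 1.0 if 1.0 <= t < 1.05 else 0.0      # neuromodulator, 1 s late
    trace += dt * (-trace / tau + stimulus)       # tag rises, then decays
    w += eta * reward * trace * dt                # reward cashes in the tag

print(f"final weight {w:.4f}; trace at reward time was "
      f"~{100 * math.exp(-0.9 / tau):.0f}% of its peak")
```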

The team was able to prove the eligibility-traces theory by isolating cells in the visual cortex of a mouse. When they stimulated the axon of one cell with an electrical impulse, they sparked a response in another cell. By doing this repeatedly, they mimicked the synaptic response between two cells as they process a stimulus and create an eligibility trace.

When the researchers later flooded the cells with neuromodulators, simulating the arrival of a delayed reward, the response between the cells strengthened (“long-term potentiation”) or weakened (“long-term depression”), showing that the cells had “learned” and were able to do so because of the eligibility trace.

“This is the basis of how we learn things through reward,” Kirkwood said, “a fundamental aspect of learning.”

In addition to a greater understanding of the mechanics of learning, these findings could enhance teaching methods and lead to treatments for cognitive problems, the researchers suggest.

Scientists at the University of Texas at Houston and the University of California, Davis were also involved in the research, which was supported by grants from JHU’s Science of Learning Institute and National Institutes of Health.

* The neuromodulators tested were norepinephrine, serotonin, dopamine, and acetylcholine, all of which have been implicated in cortical plasticity (ability to grow and form new connections to other neurons).


Abstract of Distinct Eligibility Traces for LTP and LTD in Cortical Synapses

In reward-based learning, synaptic modifications depend on a brief stimulus and a temporally delayed reward, which poses the question of how synaptic activity patterns associate with a delayed reward. A theoretical solution to this so-called distal reward problem has been the notion of activity-generated “synaptic eligibility traces,” silent and transient synaptic tags that can be converted into long-term changes in synaptic strength by reward-linked neuromodulators. Here we report the first experimental demonstration of eligibility traces in cortical synapses. We demonstrate the Hebbian induction of distinct traces for LTP and LTD and their subsequent timing-dependent transformation into lasting changes by specific monoaminergic receptors anchored to postsynaptic proteins. Notably, the temporal properties of these transient traces allow stable learning in a recurrent neural network that accurately predicts the timing of the reward, further validating the induction and transformation of eligibility traces for LTP and LTD as a plausible synaptic substrate for reward-based learning.

Controlling acoustic properties with algorithms and computational methods

A “zoolophone” with animal-shaped keys automatically created using a computer algorithm. The tone of each key is comparable to that of a professionally made instrument, demonstrating an algorithm for computationally designing an object’s vibrational properties and sounds. (credit: Changxi Zheng/Columbia Engineering)

Computer scientists at Columbia Engineering, Harvard, and MIT have demonstrated that acoustic properties — both sound and vibration — can be controlled by 3D-printing specific shapes.

They designed an optimization algorithm and used computational methods and digital fabrication to alter the shape of 2D and 3D objects, creating what looks to be a simple children’s musical instrument — a xylophone with keys in the shape of zoo animals.

Practical uses

“Our discovery could lead to a wealth of possibilities that go well beyond musical instruments,” says Changxi Zheng, assistant professor of computer science at Columbia Engineering, who led the research team.

“Our algorithm could lead to ways to build less noisy computer fans and bridges that don’t amplify vibrations under stress, and advance the construction of micro-electro-mechanical resonators whose vibration modes are of great importance.”

Zheng, who works in the area of dynamic, physics-based computational sound for immersive environments, wanted to see if he could use computation and digital fabrication to actively control the acoustical property, or vibration, of an object.

Zheng’s team decided to focus on simplifying the slow, complicated, manual process of designing “idiophones” — musical instruments that produce sounds through vibrations in the instrument itself, not through strings or reeds.

The surface vibration and resulting sounds depend on the idiophone’s shape in a complex way, so designing shapes that produce desired sound characteristics is not straightforward. Forms have so far been limited to well-understood designs such as bars, which are tuned by carefully drilling dimples on the underside of the instrument.
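For those simple bars the tuning problem is classical: the free-free Euler–Bernoulli beam (a standard result, not one derived in the paper) has transverse mode frequencies

$$ f_n = \frac{\lambda_n^{2}}{2\pi L^{2}}\sqrt{\frac{EI}{\rho A}}, \qquad \lambda_1 \approx 4.730,\ \lambda_2 \approx 7.853, $$

where L is the bar length, E Young’s modulus, ρ density, A the cross-section area, and I its second moment. For a rectangular bar $I/A = t^2/12$, so the fundamental scales as $t/L^2$: removing material from the underside locally thins the bar and lowers selected partials. Arbitrary animal silhouettes admit no such closed form, which is why the team turned to numerical optimization.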

Optimizing sound properties

To demonstrate their new technique, the team settled on building a “zoolophone,” a metallophone with playful animal shapes (a metallophone is an idiophone made of tuned metal bars that can be struck to make sound, such as a glockenspiel).

Using the algorithm, the team optimized and then 3D-printed the instrument’s keys in the shapes of colorful lions, turtles, elephants, giraffes, and more, modeling the geometry to achieve the desired pitch and amplitude of each key.

“Our zoolophone’s keys are automatically tuned to play notes on a scale with overtones and frequency of a professionally produced xylophone,” says Zheng, whose team spent nearly two years developing new computational methods while borrowing concepts from computer graphics, acoustic modeling, mechanical engineering, and 3D printing.

“By automatically optimizing the shape of 2D and 3D objects through deformation and perforation, we were able to produce such professional sounds that our technique will enable even novices to design metallophones with unique sound and appearance.”

3D metallophone cups automatically created by computers (credit: Changxi Zheng/Columbia Engineering)

The zoolophone represents fundamental research into understanding the complex relationships between an object’s geometry and its material properties, and the vibrations and sounds it produces when struck.

While previous algorithms attempted to optimize either amplitude (loudness) or frequency, the zoolophone required optimizing both simultaneously to fully control its acoustic properties. Creating realistic musical sounds required more work to add in overtones, secondary frequencies higher than the main one that contribute to the timbre associated with notes played on a professionally produced instrument.

Finding the shape that produces the desired sound when struck proved to be the core computational difficulty: the search space for optimizing both amplitude and frequency is immense. To improve the chances of finding it, Zheng and his colleagues developed a new, fast stochastic optimization method, which they called Latin Complement Sampling (LCS).

LCS takes as input a shape along with user-specified frequency and amplitude spectra (users can specify, for instance, which shapes should produce which notes) and, from that information, optimizes the shape of the objects through deformation and perforation to produce the wanted sounds. LCS outperformed all the alternative optimization methods tested and can be used in a variety of other problems.
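To make the setup concrete, here is a toy random search over bar dimensions against a target pitch. It is emphatically not Latin Complement Sampling (whose details are in the paper), only an illustration of the kind of objective such an optimizer explores; the constant C is a rough aluminum-like value derived from the beam formula above:

```python
import random

# Toy random search over bar dimensions against a target pitch.  NOT the
# authors' Latin Complement Sampling; it only illustrates the kind of
# objective such an optimizer explores.  C lumps material properties via
# the beam formula above (value roughly aluminum-like).
C = 5200.0                     # Hz*m, illustrative material constant
target_f = 440.0               # desired fundamental (concert A), Hz

def fundamental(t, L):
    """Free-free rectangular bar: f1 ~ C * t / L**2."""
    return C * t / L**2

random.seed(0)
best, best_err = None, float("inf")
for _ in range(20_000):                      # blind stochastic sampling
    t = random.uniform(0.004, 0.02)          # thickness (m)
    L = random.uniform(0.10, 0.40)           # length (m)
    err = abs(fundamental(t, L) - target_f)
    if err < best_err:
        best, best_err = (t, L), err

t, L = best
print(f"t = {t*1000:.1f} mm, L = {L*100:.1f} cm -> f = {fundamental(t, L):.1f} Hz")
```

The real problem is far harder: many shape parameters per key and simultaneous frequency and amplitude targets, which is where a smarter sampler earns its keep.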

“Acoustic design of objects today remains slow and expensive,” Zheng notes. “We would like to explore computational design algorithms to improve the process for better controlling an object’s acoustic properties, whether to achieve desired sound spectra or to reduce undesired noise. This project underscores our first step toward this exciting direction in helping us design objects in a new way.”

Zheng, whose previous work in computer graphics includes synthesizing realistic sounds that are automatically synchronized to simulated motions, has already been contacted by researchers interested in applying his approach to micro-electro-mechanical systems (MEMS), in which vibrations filter RF signals.

Their work—“Computational Design of Metallophone Contact Sounds”—will be presented at SIGGRAPH Asia on November 4 in Kobe, Japan.

The work at Columbia Engineering was supported in part by the National Science Foundation (NSF) and Intel; the work at Harvard and MIT was supported by NSF, the Air Force Research Laboratory, and DARPA.


Longer-lasting, lighter lithium-ion batteries from silicon anodes

Schematic of electrode process design. (a) Components mixing under ultrasonic irradiation, (b) an optical image of the as-fabricated electrode made of silicon nanoparticles (SiNP), sulfur-doped graphene (SG), and polyacrylonitrile (PAN), (c) the electrode after sluggish heat treatment (SHT), (d) schematic of the atomic-scale structure of the electrode. (credit: Fathy M. Hassan et al./Nature Communications)

Zhongwei Chen, a chemical engineering professor at the University of Waterloo, and a team of graduate students have created a new low-cost battery design using silicon instead of graphite, boosting the performance and life of lithium-ion batteries.

Waterloo’s silicon battery technology promises a 40 to 60 per cent increase in energy density (energy storage per unit volume), which is important for consumers with smartphones, smart homes, and smart wearables. It also means an electric car could be driven up to 500 kilometers (311 miles) between charges while reducing its overall weight.

The graphite bottleneck

The Waterloo engineers found that silicon anode materials can produce batteries that store almost 10 times more energy per gram than graphite anodes.
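The “almost 10 times” figure can be checked with Faraday’s law applied to the fully lithiated phase of each material (standard textbook values, not numbers from the paper):

```python
# Back-of-envelope check of the "almost 10x" anode figure via Faraday's
# law: specific capacity Q = n*F / (3.6*M) in mAh/g, where n is the number
# of lithium atoms stored per formula unit of molar mass M (g/mol).
# Standard textbook values, not numbers taken from the paper.
F = 96485.0                            # Faraday constant, C/mol

def mAh_per_g(n, M):
    return n * F / (3.6 * M)

graphite = mAh_per_g(1, 72.07)         # LiC6: one Li per C6 unit
silicon = mAh_per_g(3.75, 28.09)       # Li15Si4: 3.75 Li per Si atom
print(f"graphite ~{graphite:.0f} mAh/g, silicon ~{silicon:.0f} mAh/g, "
      f"ratio ~{silicon/graphite:.1f}x")
# -> graphite ~372 mAh/g, silicon ~3579 mAh/g, ratio ~9.6x
```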

“As batteries improve, graphite is slowly becoming a performance bottleneck because of the limited amount of energy that it can store,” said Chen, the Canada Research Chair in Advanced Materials for Clean Energy and a member of the Waterloo Institute for Nanotechnology and the Waterloo Institute for Sustainable Energy.

The most critical challenge the Waterloo researchers faced in the new design was the loss of energy that occurs when silicon contracts and then expands by as much as 300 per cent with each charge cycle. The resulting increase and decrease in silicon volume forms cracks that reduce battery performance, create short circuits, and eventually cause the battery to stop operating.

To overcome this problem, Chen’s team along with the General Motors Global Research and Development Centre developed a flash heat treatment for fabricated silicon-based lithium-ion electrodes that minimizes volume expansion while boosting the performance and cycle capability of lithium-ion batteries.

“The economical flash heat treatment creates uniquely structured silicon anode materials that deliver extended cycle life to more than 2000 cycles with increased energy capacity of the battery,” said Chen.

Chen expects batteries based on the new design to be on the market next year.

Their findings are published in an open-access paper in the latest issue of Nature Communications.


Abstract of Evidence of covalent synergy in silicon–sulfur–graphene yielding highly efficient and long-life lithium-ion batteries

Silicon has the potential to revolutionize the energy storage capacities of lithium-ion batteries to meet the ever increasing power demands of next generation technologies. To avoid the operational stability problems of silicon-based anodes, we propose synergistic physicochemical alteration of electrode structures during their design. This capitalizes on covalent interaction of Si nanoparticles with sulfur-doped graphene and with cyclized polyacrylonitrile to provide a robust nanoarchitecture. This hierarchical structure stabilized the solid electrolyte interphase leading to superior reversible capacity of over 1,000 mAh g−1 for 2,275 cycles at 2 A g−1. Furthermore, the nanoarchitectured design lowered the contact of the electrolyte to the electrode leading to not only high coulombic efficiency of 99.9% but also maintaining high stability even with high electrode loading associated with 3.4 mAh cm−2. The excellent performance combined with the simplistic, scalable and non-hazardous approach render the process as a very promising candidate for Li-ion battery technology.

Holographic sonic tractor beam lifts and moves objects using soundwaves

Holograms (3-D light fields) can be projected from a 2-dimensional surface to control objects. (credit: Asier Marzo, Bruce Drinkwater and Sriram Subramanian)

British researchers have built a working Star-Trek-style “tractor beam” — a device that can attract or repel objects at a distance. It uses high-amplitude soundwaves to generate an acoustic hologram that can grasp and move small objects.

The technique, published in an open-access paper in Nature Communications October 27, has a wide range of potential applications, the researchers say. A sonic production line could transport delicate objects and assemble them, all without physical contact. Or a miniature version could grip and transport drug capsules or microsurgical instruments through living tissue.

The device was developed at the Universities of Sussex and Bristol in collaboration with Ultrahaptics.


University of Sussex | Levitation using sound waves

The researchers used an array of 64 miniature loudspeakers. The whole system consumes just 9 watts of power, used to create high-pitched (40 kHz), high-intensity sound waves that levitate a spherical expanded-polystyrene bead 4 mm in diameter.

The tractor beam works by surrounding the object with high-intensity sound to create a force field that keeps the objects in place. By carefully controlling the output of the loudspeakers, the object can be held in place, moved, or rotated.

Three different shapes of acoustic force fields work as tractor beams: an acoustic force field that resembles a pair of fingers or tweezers; an acoustic vortex, the objects becoming trapped at the core; and a high-intensity “cage” that surrounds the objects and holds them in place from all directions.
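A minimal sketch of the underlying idea, under simplifying assumptions (8 x 8 grid, 1 cm pitch, free-field propagation): each loudspeaker gets a phase offset that cancels its acoustic path delay to a chosen point, so all 64 waves interfere constructively there; flipping half the array by pi splits the focus into a tweezer-like twin trap. The geometry and the crude trap signature below are illustrative, not the paper’s optimized solutions:

```python
import math

# Minimal sketch of acoustic-hologram phasing: each speaker is phased to
# cancel its path delay to the focus, so all waves arrive in phase there;
# adding pi to half the array splits the focus into a tweezer-like twin
# trap.  Illustrative only, not the paper's optimized fields.
c, f = 343.0, 40_000.0              # speed of sound (m/s), 40 kHz drive
k = 2 * math.pi * f / c             # wavenumber
focus = (0.0, 0.0, 0.05)            # trap point 5 cm above the array

phases = []
for ix in range(8):                 # 8 x 8 grid, 1 cm pitch (assumed)
    for iy in range(8):
        x, y = (ix - 3.5) * 0.01, (iy - 3.5) * 0.01
        d = math.dist((x, y, 0.0), focus)         # path length to focus
        phi = (-k * d) % (2 * math.pi)            # focusing phase
        if x > 0:                                 # crude twin-trap signature
            phi = (phi + math.pi) % (2 * math.pi)
        phases.append(phi)

print(f"{len(phases)} element phases; first row:",
      ", ".join(f"{p:.2f}" for p in phases[:8]))
```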

Previous attempts surrounded the object with loudspeakers, which limited the extent of movement and restricted many applications. Last year, the University of Dundee presented the concept of a tractor beam, but no objects were held in the ray.

The team is now designing different variations of this system. A bigger version aims at levitating a soccer ball from 10 meters away and a smaller version aims at manipulating particles inside the human body.


Asier Marzo, Matt Sutton, Bruce Drinkwater and Sriram Subramanian | Acoustic holograms are projected from a flat surface and, contrary to traditional holograms, they exert considerable forces on the objects contained within them. The acoustic holograms can be updated in real time to translate, rotate, and combine levitated particles, enabling unprecedented contactless manipulators such as tractor beams.


Abstract of Holographic acoustic elements for manipulation of levitated objects

Sound can levitate objects of different sizes and materials through air, water and tissue. This allows us to manipulate cells, liquids, compounds or living things without touching or contaminating them. However, acoustic levitation has required the targets to be enclosed with acoustic elements or had limited maneuverability. Here we optimize the phases used to drive an ultrasonic phased array and show that acoustic levitation can be employed to translate, rotate and manipulate particles using even a single-sided emitter. Furthermore, we introduce the holographic acoustic elements framework that permits the rapid generation of traps and provides a bridge between optical and acoustical trapping. Acoustic structures shaped as tweezers, twisters or bottles emerge as the optimum mechanisms for tractor beams or containerless transportation. Single-beam levitation could manipulate particles inside our body for applications in targeted drug delivery or acoustically controlled micro-machines that do not interfere with magnetic resonance imaging.

Up to 27 seconds of inattention after talking to your car or smartphone

This graphic shows the mental distraction scores of three smartphone personal assistants and 10 in-vehicle infotainment systems for using voice commands in cars to call contacts, dial phone numbers or change music. The smartphone assistants’ scores were 0.3 points higher than shown if a driver also sent text messages using them. (credit: AAA Foundation for Traffic Safety)

If you think it is okay to talk to your car infotainment system or smartphone while driving or even when stopped at a red light, think again. It takes up to 27 seconds to regain full attention after issuing voice commands, University of Utah researchers found in two new studies for the AAA Foundation for Traffic Safety.

One of the studies showed that it is highly distracting to use hands-free voice commands to dial phone numbers, call contacts, change music, and send texts with Microsoft Cortana, Apple Siri and Google Now smartphone personal assistants.

Mazda 2015 steering wheel and dashboard. Phone calls can be dialed or received via Bluetooth on the steering wheel and the display has multiple screens for phone directory, radio, Sirius XM, and GPS. (credit: Landmark MAZDA)

The other study examined voice-dialing, voice-contact calling, and music selection using in-vehicle information or “infotainment” systems in 10 model-year 2015 vehicles. Three were rated as moderately distracting, six as highly distracting and the system in the 2015 Mazda 6 as very highly distracting.

The research also found that, contrary to what some may believe, practice with voice-recognition systems doesn’t eliminate distraction. The studies also showed older drivers — those most likely to buy autos with infotainment systems — are much more distracted than younger drivers when giving voice commands.

But the most surprising finding was that a driver traveling only 25 mph continues to be distracted for up to 27 seconds after disconnecting from highly distracting phone and car voice-command systems, and up to 15 seconds after disconnecting from the moderately distracting systems. The 27 seconds means a driver traveling 25 mph would cover the length of three football fields before regaining full attention.
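The three-football-fields figure is straightforward unit conversion (a quick check, not an additional result from the study):

```python
# Quick unit-conversion check of the "three football fields" figure.
mph_to_ms = 1609.344 / 3600        # miles per hour -> meters per second
distance_m = 25 * mph_to_ms * 27   # 25 mph held for 27 seconds
fields = distance_m / 91.44        # one football field = 100 yd = 91.44 m
print(f"{distance_m:.0f} m, about {fields:.1f} football fields")
# -> 302 m, about 3.3 football fields
```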

“Many of these systems have been put into cars with a voice-recognition system to control entertainment: Facebook, Twitter, Instagram, Snapchat, Facetime, etc. We now are trying to entertain the driver rather than keep the driver’s attention on the road,” said David Strayer, the University of Utah psychology professor who led the studies.

The new AAA reports urge that voice-activated, in-vehicle information systems “ought not to be used indiscriminately” while driving, and advise that “caution is warranted” in smartphone use while driving.

The studies are the fifth and sixth since 2013 by University of Utah psychologists funded by the AAA Foundation for Traffic Safety. AAA was formerly known as the American Automobile Association. Strayer and Joel Cooper ran the studies with Utah psychology doctoral students Joanna Turrill, James Coleman, and Rachel Hopman.

The ratings: In-car systems and smartphone assistants are distracting

The previous Utah-AAA studies devised a five-point scale: 1 = mild distraction, 2 = moderate distraction, 3 = high distraction, 4 = very high distraction, and 5 = maximum distraction. Those studies showed cellphone calls were moderately distracting, with scores of 2.5 for hand-held calls and 2.3 for hands-free calls. Listening to a book on tape rated mild distraction at 1.7; listening to the radio, 1.2.

One of the new studies found moderate distraction for in-vehicle information systems in the Chevy Equinox with MyLink (2.4), Buick Lacrosse with IntelliLink (2.4), and Toyota 4Runner with Entune (2.9).

High distraction systems were the Ford Taurus with Sync MyFord Touch (3.1), Chevy Malibu with MyLink (3.4), Volkswagen Passat with Car-Net (3.5), Nissan Altima with Nissan Connect (3.7), Chrysler 200c with Uconnect (3.8) and Hyundai Sonata with Blue Link (3.8). The Mazda 6’s Connect system rated very highly distracting (4.6).

In some cases, the same voice-command system (like Chevy MyLink) got different distraction scores in different models – something the researchers speculate is due to varying amounts of road noise and use of different in-vehicle microphones.

The second new study found all three major smartphone personal assistants either highly or very highly distracting. Two scores were given to each voice-based system: A lower number for using voice commands only to make calls or change music when driving — the same tasks done with the in-car systems — and a higher number that also included using smartphones to send texts by voice commands.

In 2013, 3,154 people died and 424,000 others were injured in motor vehicle crashes on U.S. roads involving driver distraction, says the U.S. Department of Transportation.

Both study reports are open-access.

How to fall gracefully if you’re a robot


Georgia Tech | Algorithm allows robot to fall gracefully

Researchers at Georgia Tech are teaching robots how to fall with grace and without serious damage.

This is becoming important as costly robots become more common in manufacturing, healthcare, and domestic tasks.

Ph.D. graduate Sehoon Ha and Professor Karen Liu developed a new algorithm that tells a robot how to react to a wide variety of falls, from a single step to recover from a gentle nudge to a rolling motion that breaks a high-speed fall. The idea is for the robot to learn the best sequence of movements to slow its momentum and minimize the damage or injury it might cause to itself or others while falling.

“Our work unified existing research about how to teach robots to fall by giving them a tool to automatically determine the total number of contacts (how many hands shoved it, for example), the order of contacts, and the position and timing of those contacts,” said Ha, now a postdoctoral associate at Disney Research Pittsburgh. “All of that impacts the potential of a fall and changes the robot’s response.”

The algorithm was validated in physics simulation and experimentally tested on a BioloidGP humanoid.

With the latest finding, Ha builds upon Liu’s previous research that studied how cats modify their bodies in the midst of a fall. Liu knew from that work that one of the most important factors in a fall is the angle of the landing.

“From previous work, we knew a robot had the computational know-how to achieve a softer landing, but it didn’t have the hardware to move quickly enough like a cat,” Liu said. “Our new planning algorithm takes into account the hardware constraints and the capabilities of the robot, and suggests a sequence of contacts so the robot gradually can slow itself down.”
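The paper casts the fall as a sequence of inverted-pendulum sub-problems solved by dynamic programming. The toy program below captures only the flavor of that idea, under a made-up cost model in which total momentum must be dissipated across contacts and one hard impact costs more than several soft ones:

```python
from functools import lru_cache

# Toy dynamic program in the spirit of the paper's contact planning:
# dissipate the robot's momentum across a sequence of contacts, where a
# contact absorbing impulse j costs j**2 (one hard impact hurts more than
# several soft ones) plus a fixed per-contact overhead.  The cost model
# and all numbers are made up for illustration.
P0 = 10            # initial momentum to dissipate (discrete units)
J_MAX = 4          # largest impulse a single contact can absorb
C_CONTACT = 3      # fixed overhead per contact (time, complexity)

@lru_cache(maxsize=None)
def min_damage(p):
    """Minimum total damage to bring remaining momentum p to zero."""
    if p == 0:
        return 0
    return min(j * j + C_CONTACT + min_damage(p - j)
               for j in range(1, min(J_MAX, p) + 1))

print("optimal total damage:", min_damage(P0))   # the optimum spreads the
                                                 # fall over several soft contacts
```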

They suggest robots may soon fall more gracefully than people — and possibly even cats.


DARPA TV | A Celebration of Risk (a.k.a., Robots Take a Spill)


Abstract of Multiple Contact Planning for Minimizing Damage of Humanoid Falls

This paper introduces a new planning algorithm to minimize the damage of humanoid falls by utilizing multiple contact points. Given an unstable initial state of the robot, our approach plans for the optimal sequence of contact points such that the initial momentum is dissipated with minimal impacts on the robot. Instead of switching among a collection of individual control strategies, we propose a general algorithm which plans for appropriate responses to a wide variety of falls, from a single step to recover a gentle nudge, to a rolling motion to break a high-speed fall. Our algorithm transforms the falling problem into a sequence of inverted pendulum problems and use dynamic programming to solve the optimization efficiently. The planning algorithm is validated in physics simulation and experimentally tested on a BioloidGP humanoid.

A drug-delivery technique to bypass the blood-brain barrier

Drugs used to treat a variety of central nervous system diseases may be administered through the nose and diffused through an implanted mucosal graft (left, in red) to gain access to the brain. Under normal circumstances, there are multiple layers within the nose that block the access of pharmaceutical agents from getting to the brain, including bone and the dura/arachnoid membrane, which represents part of the blood-brain barrier (top right). After endoscopic skull base surgery (bottom right), all of these layers are removed and replaced with a nasal mucosal graft, which is 1,000 times more porous than the native blood-brain barrier. So these grafts may be used to deliver very large drugs, including proteins, which would otherwise be blocked by the blood-brain barrier. (credit: Garyfallia Pagonis and Benjamin S. Bleier, M.D.)

Researchers at Massachusetts Eye and Ear/Harvard Medical School and Boston University have developed a new technique to deliver drugs across the blood-brain barrier and have successfully tested it in a Parkinson’s mouse model (a line of mice that has been genetically modified to express the symptoms and pathological features of Parkinson’s to various extents).

Their findings, published in the journal Neurosurgery, lend hope to patients with neurological conditions that are difficult to treat due to a barrier mechanism that prevents approximately 98 percent of drugs from reaching the brain and central nervous system.

“Although we are currently looking at neurodegenerative disease, there is potential for the technology to be expanded to psychiatric diseases, chronic pain, seizure disorders, and many other conditions affecting the brain and nervous system down the road,” said senior author Benjamin S. Bleier, M.D., of the department of otolaryngology at Mass. Eye and Ear/Harvard Medical School.

The nasal mucosal grafting solution

Researchers delivered glial-derived neurotrophic factor (GDNF), a therapeutic protein being tested as a treatment for Parkinson’s disease, to the brains of mice. They showed that their delivery method was equivalent to direct injection of GDNF, which has been shown to delay and even reverse disease progression of Parkinson’s disease in pre-clinical models.

Nasal mucosal grafting is a technique regularly used in the ENT (ear, nose, and throat) field to reconstruct the barrier around the brain after surgery to the skull base. ENT surgeons commonly use endoscopic approaches to remove brain tumors through the nose, making a window through the blood-brain barrier to access the brain. Once the treatment is finished, they use adjacent nasal lining to rebuild the hole in a permanent and safe way.

The safety and efficacy of these methods have been well established through long-term clinical outcomes studies in the field, with the nasal lining protecting the brain from infection just as the blood-brain barrier did.

By functionally replacing a section of the blood-brain barrier with nasal mucosa, which is more than 1,000 times more permeable than the native barrier, surgeons could create a “screen door” to allow for drug delivery to the brain and central nervous system.
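The “screen door” intuition follows from steady-state membrane transport (a standard relation, not one derived in the paper): drug flux across a barrier is linear in its permeability, so for the same concentration gradient a graft 1,000 times more permeable passes roughly 1,000 times the drug:

$$ J = P\,\Delta C, \qquad \frac{J_{\text{graft}}}{J_{\text{BBB}}} = \frac{P_{\text{graft}}}{P_{\text{BBB}}} \approx 10^{3}, $$

where J is the flux, P the permeability, and ΔC the drug concentration difference across the barrier.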

The technique has the potential to benefit a large population of patients with neurodegenerative disorders, where there is still a specific unmet need for blood-brain-penetrating therapeutic delivery strategies.

The study was funded by The Michael J. Fox Foundation for Parkinson’s Research (MJFF).


Abstract of Heterotopic Mucosal Grafting Enables the Delivery of Therapeutic Neuropeptides Across the Blood Brain Barrier

BACKGROUND: The blood-brain barrier represents a fundamental limitation in treating neurological disease because it prevents all neuropeptides from reaching the central nervous system (CNS). Currently, there is no efficient method to permanently bypass the blood-brain barrier.

OBJECTIVE: To test the feasibility of using nasal mucosal graft reconstruction of arachnoid defects to deliver glial-derived neurotrophic factor (GDNF) for the treatment of Parkinson disease in a mouse model.

METHODS: The Institutional Animal Care and Use Committee approved this study in an established murine 6-hydroxydopamine Parkinson disease model. A parietal craniotomy and arachnoid defect was repaired with a heterotopic donor mucosal graft. The therapeutic efficacy of GDNF (2 [mu]g/mL) delivered through the mucosal graft was compared with direct intrastriatal GDNF injection (2 [mu]g/mL) and saline control through the use of 2 behavioral assays (rotarod and apomorphine rotation). An immunohistological analysis was further used to compare the relative preservation of substantia nigra cell bodies between treatment groups.

RESULTS: Transmucosal GDNF was equivalent to direct intrastriatal injection at preserving motor function at week 7 in both the rotarod and apomorphine rotation behavioral assays. Similarly, both transmucosal and intrastriatal GDNF demonstrated an equivalent ratio of preserved substantia nigra cell bodies (0.79 +/- 0.14 and 0.78 +/- 0.09, respectively, P = NS) compared with the contralateral control side, and both were significantly greater than saline control (0.53 +/- 0.21; P = .01 and P = .03, respectively).

CONCLUSION: Transmucosal delivery of GDNF is equivalent to direct intrastriatal injection at ameliorating the behavioral and immunohistological features of Parkinson disease in a murine model. Mucosal grafting of arachnoid defects is a technique commonly used for endoscopic skull base reconstruction and may represent a novel method to permanently bypass the blood-brain barrier.