Graphene sheets allow for very-low-cost diagnostic devices

A new, very-low-cost diagnostic method: mild heating of graphene oxide sheets makes it possible to bond particular compounds (blue, orange, purple) to the sheets’ surface, a new study shows. These compounds in turn select and bond with specific molecules of interest, including DNA and proteins, or even whole cells. In this image, the treated graphene oxide on the right has oxygen molecules (red) clustered together, making it nearly twice as efficient at capturing cells (green) as the material on the left. (credit: the researchers)

A new method developed at MIT and National Chiao Tung University, based on specially treated sheets of graphene oxide, could make it possible to capture and analyze individual cells from a small sample of blood. It could potentially lead to very-low-cost diagnostic devices (less than $5 apiece) that are mass-producible and could be used almost anywhere for point-of-care testing, especially in resource-constrained settings.

A single cell can contain a wealth of information about the health of an individual. The new system could ultimately lead to a range of simple devices that perform a variety of sensitive diagnostic tests, such as cancer screening or treatment follow-up, even in places far from typical medical facilities.

How to capture DNA, proteins, or even whole cells for analysis

The material (graphene oxide, or GO) used in this research is an oxidized version of the two-dimensional form of pure carbon known as graphene. The key to the new process is heating the graphene oxide at relatively mild temperatures.

This low-temperature annealing, as it is known, makes it possible to bond particular compounds to the material’s surface that can be used to capture molecules of diagnostic interest.

Schematic showing oxygen clustering, resulting in improved ability to recognize foreign molecules (credit: Neelkanth M. Bardhan et al./ACS Nano)

The heating process changes the material’s surface properties, causing oxygen atoms to cluster together, leaving spaces of bare graphene between them. This leaves room to attach other chemicals to the surface, which can be used to select and bond with specific molecules of interest, including DNA and proteins, or even whole cells. Once captured, those molecules or cells can then be subjected to a variety of tests.*

Nanobodies

The new research demonstrates how that basic process could potentially enable a suite of low-cost diagnostic systems.

For this proof-of-concept test, the team used molecules that can quickly and efficiently capture specific immune cells that are markers for certain cancers. They were able to demonstrate that their treated graphene oxide surfaces were almost twice as effective at capturing such cells from whole blood, compared to devices fabricated using ordinary, untreated graphene oxide.

They did this by enzymatically coating the treated graphene oxide surface with peptides called “nanobodies” — subunits of antibodies, which can be cheaply and easily produced in large quantities in bioreactors and are highly selective for particular biomolecules.**

The new process allows for rapid capture and assessment of cells or biomolecules within about 10 minutes and without the need for refrigeration of samples or incubators for precise temperature control. And the whole system is compatible with existing large-scale manufacturing methods.

The researchers believe many different tests could be incorporated on a single device, all of which could be placed on a small glass slide like those used for microscopy. The basic processing method could also make possible a wide variety of other applications, including solar cells and light-emitting devices.

The findings are reported in the journal ACS Nano. Authors include Angela Belcher, the James Mason Crafts Professor in biological engineering and materials science and engineering at MIT and a member of the Koch Institute for Integrative Cancer Research; Jeffrey Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at MIT; Hidde L. Ploegh, a professor of biology and member of the Whitehead Institute for Biomedical Research; Guan-Yu Chen, an assistant professor in biomedical engineering at National Chiao Tung University in Taiwan; and Zeyang Li, a doctoral student at the Whitehead Institute.

“Efficiency is especially important if you’re trying to detect a rare event,” Belcher says. “The goal of this was to show a high efficiency of capture.” The next step after this basic proof of concept, she says, is to try to make a working detector for a specific disease model.

The work was supported by the Army Research Office Institute for Collaborative Biotechnologies and MIT’s Tata Center and Solar Frontiers Center.

* Other researchers have been trying to develop diagnostic systems using a graphene oxide substrate to capture specific cells or molecules, but those approaches used just the raw, untreated material. Despite a decade of research, other attempts to improve such devices’ efficiency have relied on external modifications, such as surface patterning through lithographic fabrication techniques or added microfluidic channels, which add to the cost and complexity. Existing methods for treating graphene oxide for this purpose require high-temperature treatments or harsh chemicals; the new system, which the group has patented, requires no chemical pretreatment and an annealing temperature of just 50 to 80 degrees Celsius (122 to 176 °F).

** The researchers found that increasing the annealing time steadily increased the efficiency of cell capture: After nine days of annealing, the efficiency of capturing cells from whole blood went from 54 percent, for untreated graphene oxide, to 92 percent for the treated material. The team then performed molecular dynamics simulations to understand the fundamental changes in the reactivity of the graphene oxide base material. The simulation results, which the team also verified experimentally, suggested that upon annealing, the relative fraction of one type of oxygen (carbonyl) increases at the expense of the other types of oxygen functional groups (epoxy and hydroxyl) as a result of the oxygen clustering. This change makes the material more reactive, which explains the higher density of cell capture agents and increased efficiency of cell capture.


Abstract of Enhanced Cell Capture on Functionalized Graphene Oxide Nanosheets through Oxygen Clustering

With the global rise in incidence of cancer and infectious diseases, there is a need for the development of techniques to diagnose, treat, and monitor these conditions. The ability to efficiently capture and isolate cells and other biomolecules from peripheral whole blood for downstream analyses is a necessary requirement. Graphene oxide (GO) is an attractive template nanomaterial for such biosensing applications. Favorable properties include its two-dimensional architecture and wide range of functionalization chemistries, offering significant potential to tailor affinity toward aromatic functional groups expressed in biomolecules of interest. However, a limitation of current techniques is that as-synthesized GO nanosheets are used directly in sensing applications, and the benefits of their structural modification on the device performance have remained unexplored. Here, we report a microfluidic-free, sensitive, planar device on treated GO substrates to enable quick and efficient capture of Class-II MHC-positive cells from murine whole blood. We achieve this by using a mild thermal annealing treatment on the GO substrates, which drives a phase transformation through oxygen clustering. Using a combination of experimental observations and MD simulations, we demonstrate that this process leads to improved reactivity and density of functionalization of cell capture agents, resulting in an enhanced cell capture efficiency of 92 ± 7% at room temperature, almost double the efficiency afforded by devices made using as-synthesized GO (54 ± 3%). Our work highlights a scalable, cost-effective, general approach to improve the functionalization of GO, which creates diverse opportunities for various next-generation device applications.

Deep-brain imaging using minimally invasive surgical needle and laser light

An image of cells taken inside a mouse brain using a new minimally invasive, inexpensive method to take high-resolution brain pictures (credit: Rajesh Menon)

Using just a simple, micro-thin glass surgical needle and laser light, University of Utah engineers have developed an inexpensive way to take high-resolution pictures of a mouse brain while minimizing tissue damage — a process they believe could lead to a much less invasive method for humans.

Typically, researchers must either surgically take a sample of the animal’s brain to examine the cells under a microscope or use an endoscope, which can be 10 to 100 times thicker than a needle.

Schematic of the new “computational-cannula microscopy” process. Light from a yellow-green laser (right) is focused by lenses and shines into the brain (experimental sample brain tissue is shown here), causing the cells captured by the cannula (needle) to glow. That glow is magnified and then recorded by a standard CCD camera. The recorded light is then run through a sophisticated computer algorithm that reassembles the scattered light waves into a 2D or, potentially, even a 3D picture. (credit: Ganghun Kim et al./Scientific Reports)

With the new “computational-cannula microscopy” process, the small (220 micrometers) width of the cannula allows for minimally invasive imaging, while the long length (>2 mm*) allows for deep-brain imaging of features of about 3.5 micrometers in size. Since no (slow) scanning is involved, video at the native frame rate of the camera can be achieved, allowing for capturing near-real-time live videos (currently, it takes less than one fifth of a second to compute each frame on a desktop computer).
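A toy sketch of the reconstruction idea (not the authors’ actual algorithm): if a calibration step yields the cannula’s linear transfer matrix, the scrambled camera image can be inverted, for instance by least squares. All sizes here are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64 camera pixels observing a 16-pixel scene patch.
n_cam, n_scene = 64, 16

# Calibration: the cannula scrambles light linearly, so each scene pixel
# maps to a fixed speckle pattern on the camera (transfer matrix A).
A = rng.normal(size=(n_cam, n_scene))

# A fluorescent "scene" and its scrambled camera measurement (plus noise).
x_true = rng.uniform(size=n_scene)
y = A @ x_true + 0.001 * rng.normal(size=n_cam)

# Reconstruction: invert the scrambling with least squares.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.max(np.abs(x_hat - x_true)))  # small residual error
```

Because the inversion is a single solve rather than a scan, per-frame compute stays low, which is consistent with the sub-fifth-of-a-second frame times quoted above.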

In the case of mice, researchers use optogenetics (genetically modifying the animals so that only the cells they want to see glow under this laser light), but Utah electrical and computer engineering associate professor Rajesh Menon, who led the research, believes the new process can potentially be developed for human patients. That would create a simpler, less invasive, and less expensive method than endoscopes, and it could be used for other organs.

Menon and his team have been working with the U. of U.’s renowned Nobel-winning researcher, Distinguished Professor of Biology and Human Genetics Mario Capecchi, and Jason Shepherd, assistant professor of neurobiology and anatomy.

The research is documented in the latest issue of open-access Scientific Reports.

* “With three-photon microscopy, penetration depth of up to 1.2 mm was recently reported. However, three- or multi-photon excitation is extremely inefficient due to the low absorption cross-section, which requires large excitation intensities leading to potential for photo-toxicity. Furthermore, many interesting biological features lie at depths greater than 1.2 mm from the surface of the brain such as the basal ganglia, hippocampus, and the hypothalamus.” — Ganghun Kim et al./Scientific Reports


Abstract of Deep-brain imaging via epi-fluorescence Computational Cannula Microscopy

Here we demonstrate widefield (field diameter = 200 μm) fluorescence microscopy and video imaging inside the rodent brain at a depth of 2 mm using a simple surgical glass needle (cannula) of diameter 0.22 mm as the primary optical element. The cannula guides excitation light into the brain and the fluorescence signal out of the brain. Concomitant image-processing algorithms are utilized to convert the spatially scrambled images into fluorescent images and video. The small size of the cannula enables minimally invasive imaging, while the long length (>2 mm) allow for deep-brain imaging with no additional complexity in the optical system. Since no scanning is involved, widefield fluorescence video at the native frame rate of the camera can be achieved.

This low-power chip could make speech recognition practical for tiny devices

MIT researchers have built a low-power chip specialized for automatic speech recognition. A cellphone running speech-recognition software might require about 1 watt of power; the new chip requires 100 times less power (between 0.2 and 10 milliwatts, depending on the number of words it has to recognize).

That could translate to a power savings of 90 to 99 percent, making voice control practical for wearables (especially watches, earbuds, and glasses, where speech recognition is essential) and for other simple electronic devices in the “internet of things” (IoT), including ones that have to harvest energy from their environments or go months between battery charges, says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT, whose group developed the new chip.

A voice-recognition network is too big to fit in a chip’s onboard memory, which is a problem because going off-chip for data is much more energy intensive than retrieving it from local stores. So the MIT researchers’ design concentrates on minimizing the amount of data that the chip has to retrieve from off-chip memory.

The new MIT speech-recognition chip uses an SRAM memory chip (instead of MLC flash, which requires higher energy per bit); deep neural networks (a first in a standalone hardware speech recognizer) optimized for power consumption as low as 3.3 mW by using limited network widths and quantized, sparse weight matrices; and other methods. The chip supports vocabularies of up to 145,000 words in real time. A simple “voice activity detection” circuit monitors ambient noise to determine whether it might be speech, rather than just higher energy. (credit: Michael Price et al./MIT)
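To see why quantized, sparse weight matrices shrink the off-chip-memory traffic that dominates the power budget, here is a rough Python tally. The layer size, sparsity level, and storage layout are illustrative assumptions, not figures from the paper.

```python
# Hypothetical layer: 512 x 512 weight matrix.
rows, cols = 512, 512
dense_bytes = rows * cols * 4            # float32 baseline

# Quantized + sparse: 8-bit weights, with 70% of entries pruned to zero.
sparsity = 0.70
nnz = int(rows * cols * (1 - sparsity))  # surviving nonzero weights

# CSR-style storage: 1 byte per weight + 2-byte column index per weight
# + a 4-byte row pointer per row.
sparse_bytes = nnz * (1 + 2) + (rows + 1) * 4

print(dense_bytes, sparse_bytes, dense_bytes / sparse_bytes)
```

Under these assumptions the chip fetches roughly a quarter of the bytes per layer, and every byte not fetched from off-chip memory is energy saved.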

The new chip was presented last week at the International Solid-State Circuits Conference.

The research was funded through the Qmulus Project, a joint venture between MIT and Quanta Computer, and the chip was prototyped through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.


Abstract of A Scalable Speech Recognizer with Deep-Neural-Network Acoustic Models and Voice-Activated Power Gating

The applications of speech interfaces, commonly used for search and personal assistants, are diversifying to include wearables, appliances, and robots. Hardware-accelerated automatic speech recognition (ASR) is needed for scenarios that are constrained by power, system complexity, or latency. Furthermore, a wakeup mechanism, such as voice activity detection (VAD), is needed to power gate the ASR and downstream system. This paper describes IC designs for ASR and VAD that improve on the accuracy, programmability, and scalability of previous work.

SpaceX plans global space internet

(credit: SpaceX)

SpaceX has applied to the FCC to launch 11,943 satellites into low-Earth orbit, providing “ubiquitous high-bandwidth (up to 1Gbps per user, once fully deployed) broadband services for consumers and businesses in the U.S. and globally,” according to FCC applications.

Recent meetings with the FCC suggest that the plan now looks like “an increasingly feasible reality — particularly with 5G technologies just a few years away, promising new devices and new demand for data,” Verge reports.

Such a service will be particularly useful to rural areas, which have limited access to internet bandwidth.

Low-Earth orbit (at up to 2,000 kilometers, or 1,200 mi) ensures lower latency (communication delay between Earth and satellite) — making the service usable for voice communications via Skype, for example — compared to geosynchronous orbit (at 35,786 kilometers, or 22,000 miles), offered by Dish Network and other satellite ISP services.* The downside: it takes a lot more satellites to provide the coverage.
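The latency gap follows directly from the altitudes quoted above. A back-of-the-envelope Python sketch (straight-up path at the speed of light, ignoring ground-station geometry and processing delays):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_delay_ms(altitude_km: float) -> float:
    """Minimum one-way ground-to-satellite propagation delay."""
    return altitude_km / C_KM_PER_S * 1000

leo = one_way_delay_ms(1_200)   # upper end of low-Earth orbit
geo = one_way_delay_ms(35_786)  # geosynchronous orbit

print(f"LEO: {leo:.1f} ms, GEO: {geo:.1f} ms")
```

A ground-to-ground hop traverses the satellite link four times (up and down in each direction), so GEO’s roughly 120 ms one-way delay compounds toward the 600 ms figure cited below, while LEO’s few milliseconds stay in wired-internet territory.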

Boeing, Softbank-backed OneWeb (which hopes to “connect every school to the Internet by 2022″), Telesat, and others** have proposed similar services, possibly bringing the total number of satellites to about 20,000 in low and mid earth orbits in the 2020s, estimates Next Big Future.

* “SpaceX expects its latencies between 25 and 35ms, similar to the latencies measured for wired Internet services. Current satellite ISPs have latencies of 600ms or more, according to FCC measurements,” notes Ars Technica.

** Audacy, Karousel, Kepler Communications, LeoSat, O3b, Space Norway, Theia Holdings, and ViaSat, according to Space News. The ITU [international counterpart of the FCC] has set rules preventing new constellations from interfering with established ground and satellite systems operating in the same frequencies. OneWeb, for example, has said it will basically switch off power as its satellites cross the equator so as not to disturb transmissions from geostationary-orbit satellites directly above using Ku-band frequencies.

 

Whole-body vibration may be as effective as regular exercise

Hate treadmills? No prob. The Tranquility Pod uses “pleasant sound, gentle vibration, and soothing light to transport the body, mind, and spirit to a tranquil state of relaxation” — and maybe helps you lose weight (and $30,000). (credit: Hammacher Schlemmer)

If you’re overweight and find it challenging to exercise regularly, now there’s good news: A less strenuous form of exercise known as whole-body vibration (WBV) can mimic the muscle and bone health benefits of regular exercise — at least in mice — according to a new study published in the Endocrine Society’s journal Endocrinology.

Lack of exercise is contributing to the obesity and diabetes epidemics, according to the researchers. These disorders can also increase the risk of bone fractures. Physical activity can help to decrease this risk and reduce the negative metabolic effects of these conditions.

But the alternative, WBV, can be experienced while sitting, standing, or even lying down on a machine with a vibrating platform. When the machine vibrates, it transmits energy to your body, and your muscles contract and relax multiple times during each second.

“Our study is the first to show that whole-body vibration may be just as effective as exercise at combating some of the negative consequences of obesity and diabetes,” said the study’s first author, Meghan E. McGee-Lawrence, Ph.D., of Augusta University in Georgia. “While WBV did not fully address the defects in bone mass of the obese mice in our study, it did increase global bone formation, suggesting longer-term treatments could hold promise for preventing bone loss as well.”

Just as effective as a treadmill

Glucose and insulin tolerance testing revealed that the genetically obese and diabetic mice showed similar metabolic benefits from both WBV and exercising on a treadmill. Obese mice gained less weight after exercise or WBV than obese mice in the sedentary group, although they remained heavier than normal mice. Exercise and WBV also enhanced muscle mass and insulin sensitivity in the genetically obese mice.

The findings suggest that WBV may be a useful supplemental therapy to combat metabolic dysfunction in individuals with morbid obesity. “These results are encouraging,” McGee-Lawrence said. “However, because our study was conducted in mice, this idea needs to be rigorously tested in humans to see if the results would be applicable to people.”

The authors included researchers at the National Institutes of Health’s National Institute on Aging (NIA). Funding was provided by the American Diabetes Association, the National Institutes of Health’s National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institute on Aging.

Know a cheaper alternative to the Tranquility Pod? Sound off below!

* To conduct the study, researchers examined two groups of 5-week-old male mice. One group consisted of normal mice, while the other group was genetically unresponsive to the hormone leptin, which promotes feelings of fullness after eating. Mice from each group were assigned to sedentary, WBV, or treadmill exercise conditions.

After a week-long period to grow used to the exercise equipment, the groups of mice began a 12-week exercise program. The mice in the WBV group underwent 20 minutes of WBV at a frequency of 32 Hz with 0.5g acceleration each day. Mice in the treadmill group walked for 45 minutes daily at a slight incline. For comparison, the third group did not exercise. Mice were weighed weekly during the study.
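For a sense of the mechanical “dose” those protocol figures imply, a quick Python tally (derived only from the numbers quoted above):

```python
# Vibration dose in the WBV group, from the published protocol numbers.
freq_hz = 32          # platform vibration frequency
minutes_per_day = 20  # daily WBV session length
days = 7 * 12         # 12-week program

cycles_per_day = freq_hz * minutes_per_day * 60  # contract-relax cycles/day
total_cycles = cycles_per_day * days

print(cycles_per_day, total_cycles)
```

Each daily session delivers tens of thousands of muscle contract-relax cycles, which is the mechanism by which WBV is thought to mimic exercise loading.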


Abstract of Whole-body vibration mimics the metabolic effects of exercise in male leptin receptor deficient mice

Whole-body vibration has gained attention as a potential exercise mimetic, but direct comparisons with the metabolic effects of exercise are scarce. To determine whether whole-body vibration recapitulates the metabolic and osteogenic effects of physical activity, we exposed male wildtype (Wt) and leptin receptor deficient (db/db) mice to daily treadmill exercise or whole-body vibration for three months. Body weights were analyzed and compared with Wt and db/db mice that remained sedentary. Glucose and insulin tolerance testing revealed comparable attenuation of hyperglycemia and insulin resistance in db/db mice following treadmill exercise or whole-body vibration. Both interventions reduced body weight in db/db mice and normalized muscle fiber diameter. Treadmill exercise and whole-body vibration also attenuated adipocyte hypertrophy in visceral adipose tissue and reduced hepatic lipid content in db/db mice. Although the effects of leptin receptor deficiency on cortical bone structure were not eliminated by either intervention, exercise and whole-body vibration increased circulating levels of osteocalcin in db/db mice. In the context of increased serum osteocalcin, the modest effects of TE and WBV on bone geometry, mineralization, and biomechanics may reflect subtle increases in osteoblast activity in multiple areas of the skeleton. Taken together, these observations indicate that whole-body vibration recapitulates the effects of exercise on metabolism in type 2 diabetes.

Transcranial alternating current stimulation used to boost working memory

The fMRI scans show that stimulation “in beat” increases brain activity in the regions involved in task performance. On the other hand, stimulation “out of beat” showed activity in regions usually associated with resting. (credit: Ines Violante)

In a study published Tuesday, Mar. 14, in the open-access journal eLife, researchers at Imperial College London found that applying transcranial alternating current stimulation (tACS) through the scalp helped to synchronize brain waves in different areas of the brain, enabling subjects to perform better on tasks involving short-term working memory.

The hope is that the approach could one day be used to bypass damaged areas of the brain and relay signals in people with traumatic brain injury, stroke, or epilepsy.

“What we observed is that people performed better when the two waves had the same rhythm and at the same time,” said Ines Ribeiro Violante, PhD, a neuroscientist in the Department of Medicine at Imperial, who led the research. The current gave a performance boost to the memory processes used when people try to remember names at a party, telephone numbers, or even a short grocery list.

Keeping the beat

Violante and team targeted two brain regions — the middle frontal gyrus and the inferior parietal lobule — which are known to be involved in working memory.

Ten volunteers were asked to carry out a set of memory tasks of increasing difficulty while receiving theta-frequency stimulation to the two brain regions at slightly different times (unsynchronized), at the same time (synchronized), or as only a quick burst (sham) to give the impression of receiving full treatment.

In the working memory experiments, participants looked at a screen on which numbers flashed up and had to remember whether each number was the same as the previous one or, in the harder trials, whether it matched the number shown two items earlier.
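The task described is a classic n-back test (1-back for the easy condition, 2-back for the hard one). A minimal Python sketch of its scoring logic; the digit sequence is illustrative, not from the study:

```python
def n_back_matches(sequence, n):
    """Return True/False per trial: does item i match the item n steps back?
    The first n trials have no valid comparison and score False."""
    return [i >= n and sequence[i] == sequence[i - n]
            for i in range(len(sequence))]

digits = [3, 7, 3, 7, 7, 2]
print(n_back_matches(digits, 1))  # 1-back (easy condition)
print(n_back_matches(digits, 2))  # 2-back (hard condition)
```

The 2-back condition is harder because the participant must continuously hold and update two items in working memory, which is exactly where the synchronized stimulation produced the largest gains.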

Results showed that when the brain regions were stimulated in sync, reaction times on the memory tasks improved, especially on the harder of the tasks requiring volunteers to hold two strings of numbers in their minds.

“The classic behavior is to do slower on the harder cognitive task, but people performed faster with synchronized stimulation and as fast as on the simpler task,” said Violante.

Previous studies have shown that brain stimulation with electromagnetic waves or electrical current can have an effect on brain activity, but the field has remained controversial due to a lack of reproducibility. Using functional MRI to image the brain, however, enabled the team to show changes in activity occurring during stimulation.

“The results show that when the stimulation was in sync, there was an increase in activity in those regions involved in the task. When it was out of sync the opposite effect was seen,” Violante explained.

Clinical use

“The next step is to see if the brain stimulation works in patients with brain injury, in combination with brain imaging, where patients have lesions which impair long range communication in their brains,” said Violante. “The hope is that it could eventually be used for these patients, or even those who have suffered a stroke or who have epilepsy.”

The researchers also plan to combine brain stimulation with cognitive training to see if it restores lost skills.

The research was funded by the Wellcome Trust.


Abstract of Externally induced frontoparietal synchronization modulates network dynamics and enhances working memory performance

Cognitive functions such as working memory (WM) are emergent properties of large-scale network interactions. Synchronisation of oscillatory activity might contribute to WM by enabling the coordination of long-range processes. However, causal evidence for the way oscillatory activity shapes network dynamics and behavior in humans is limited. Here we applied transcranial alternating current stimulation (tACS) to exogenously modulate oscillatory activity in a right frontoparietal network that supports WM. Externally induced synchronization improved performance when cognitive demands were high. Simultaneously collected fMRI data reveals tACS effects dependent on the relative phase of the stimulation and the internal cognitive processing state. Specifically, synchronous tACS during the verbal WM task increased parietal activity, which correlated with behavioral performance. Furthermore, functional connectivity results indicate that the relative phase of frontoparietal stimulation influences information flow within the WM network. Overall, our findings demonstrate a link between behavioral performance in a demanding WM task and large-scale brain synchronization.

First nanoengineered retinal implant could help the blind regain functional vision

Activated by incident light, photosensitive silicon nanowires 1 micrometer in diameter stimulate residual undamaged retinal cells to induce visual sensations. (credit: Sohmyung Ha et al./J. Neural Eng.; image adapted)

A team of engineers at the University of California San Diego and La Jolla-based startup Nanovision Biosciences Inc. have developed the first nanoengineered retinal prosthesis — a step closer to restoring the ability of neurons in the retina to respond to light.

The technology could help tens of millions of people worldwide suffering from neurodegenerative diseases that affect eyesight, including macular degeneration, retinitis pigmentosa, and loss of vision due to diabetes.

Despite advances in the development of retinal prostheses over the past two decades, the performance of devices currently on the market to help the blind regain functional vision is still severely limited — well under the acuity threshold of 20/200 that defines legal blindness.

The new prosthesis relies on two new technologies: implanted arrays of photosensitive nanowires and a wireless power/data system.

Implanted arrays of silicon nanowires

The new prosthesis uses arrays of nanowires that simultaneously sense light and electrically stimulate the retina. The nanowires provide higher resolution than anything achieved by other devices — closer to the dense spacing of photoreceptors in the human retina, according to the researchers.*

Comparison of retina and electrode geometries between an existing retinal prosthesis and new nanoengineered prosthesis design. (left) Planar platinum electrodes (gray) of the FDA-approved Argus II retinal prosthesis (a 60-element array with 200 micrometer electrode diameter). (center) Retinal photoreceptor cells: rods (yellow) and cones (green). (right) Fabricated silicon nanowires (1 micrometer in diameter) at the same spatial magnification as photoreceptor cells. (credit: Science Photo Library and Sohmyung Ha et al./ J. Neural Eng.)

Existing retinal prostheses require a vision sensor (such as a camera) outside of the eye to capture a visual scene and then transform it into signals to sequentially stimulate retinal neurons (in a matrix). Instead, the silicon nanowires mimic the retina’s light-sensing cones and rods to directly stimulate retinal cells. The nanowires are bundled into a grid of electrodes, directly activated by light.

This direct, local translation of incident light into electrical stimulation makes for a much simpler — and scalable — architecture for a prosthesis, according to the researchers.

Wireless power and telemetry system

For the new device, power is delivered wirelessly, from outside the body to the implant, through an inductive powering telemetry system. Data to the nanowires is sent over the same wireless link at record speed and energy efficiency. The telemetry system is capable of transmitting both power and data over a single pair of inductive coils, one emitting from outside the body, and another on the receiving side in the eye.**

Three of the researchers have co-founded La Jolla-based Nanovision Biosciences, a partner in this study, to further develop and translate the technology into clinical use, with the goal of restoring functional vision in patients with severe retinal degeneration. Animal tests with the device are in progress, with clinical trials to follow.***

The research was described in a recent issue of the Journal of Neural Engineering. It was funded by Nanovision Biosciences, Qualcomm Inc., and the Institute of Engineering in Medicine and the Clinical and Translational Research Institute at UC San Diego.

* For visual acuity of 20/20, an electrode pixel size of 5 μm (micrometers) is required; 20/200 visual acuity requires 50 μm. The minimum number of electrodes required for pattern recognition or reading text is estimated to be about 600. The new nanoengineered silicon nanowire electrodes are 1 μm in diameter, and for the experiment, 2500 silicon nanowires were used.
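Assuming the required electrode pixel size scales linearly with the acuity denominator, as the two figures above suggest, the relationship can be sketched in a few lines of Python:

```python
def required_pixel_um(acuity_denominator: int) -> float:
    """Electrode pixel size needed for 20/x acuity, scaled linearly
    from the 5-um-at-20/20 figure quoted in the article."""
    return 5.0 * acuity_denominator / 20

print(required_pixel_um(20))   # 5.0 um: normal vision
print(required_pixel_um(200))  # 50.0 um: legal-blindness threshold
```

On this scaling, 1-μm nanowires sit comfortably below even the 20/20 requirement, which is why the researchers describe the array as approaching photoreceptor-scale resolution.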

** The device is highly energy efficient because it minimizes energy losses in wireless power and data transmission and in the stimulation process, recycling electrostatic energy circulating within the inductive resonant tank, and between capacitance on the electrodes and the resonant tank. Up to 90 percent of the energy transmitted is actually delivered and used for stimulation, which means less RF wireless power emitting radiation in the transmission, and less heating of the surrounding tissue from dissipated power.

These are primary cortical neurons cultured on the surface of an array of optoelectronic nanowires. Here a neuron is pulling on the nanowires, indicating that the cell is doing well on this material. (credit: UC San Diego)

*** For proof-of-concept, the researchers inserted the wirelessly powered nanowire array beneath a transgenic rat retina with rhodopsin P23H knock-in retinal degeneration. The degenerated retina interfaced in vitro with a microelectrode array for recording extracellular neural action potentials (electrical “spikes” from neural activity).


Abstract of Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry

Objective. Despite considerable advances in retinal prostheses over the last two decades, the resolution of restored vision has remained severely limited, well below the 20/200 acuity threshold of blindness. Towards drastic improvements in spatial resolution, we present a scalable architecture for retinal prostheses in which each stimulation electrode is directly activated by incident light and powered by a common voltage pulse transferred over a single wireless inductive link. Approach. The hybrid optical addressability and electronic powering scheme provides separate spatial and temporal control over stimulation, and further provides optoelectronic gain for substantially lower light intensity thresholds than other optically addressed retinal prostheses using passive microphotodiode arrays. The architecture permits the use of high-density electrode arrays with ultra-high photosensitive silicon nanowires, obviating the need for excessive wiring and high-throughput data telemetry. Instead, the single inductive link drives the entire array of electrodes through two wires and provides external control over waveform parameters for common voltage stimulation. Main results. A complete system comprising inductive telemetry link, stimulation pulse demodulator, charge-balancing series capacitor, and nanowire-based electrode device is integrated and validated ex vivo on rat retina tissue. Significance. Measurements demonstrate control over retinal neural activity both by light and electrical bias, validating the feasibility of the proposed architecture and its system components as an important first step towards a high-resolution optically addressed retinal prosthesis.

First nanoengineered retinal implant could help the blind regain functional vision

Activated by incident light, photosensitive silicon nanowires 1 micrometer in diameter stimulate residual undamaged retinal cells to induce visual sensations. (credit (image adapted): Sohmyung Ha et al./ J. Neural Eng)

A team of engineers at the University of California San Diego and La Jolla-based startup Nanovision Biosciences Inc. has developed the first nanoengineered retinal prosthesis — a step closer to restoring the ability of neurons in the retina to respond to light.

The technology could help tens of millions of people worldwide suffering from neurodegenerative diseases that affect eyesight, including macular degeneration, retinitis pigmentosa, and loss of vision due to diabetes.

Despite advances in the development of retinal prostheses over the past two decades, the performance of devices currently on the market to help the blind regain functional vision is still severely limited — well under the acuity threshold of 20/200 that defines legal blindness.

The new prosthesis relies on two new technologies: implanted arrays of photosensitive nanowires and a wireless power/data system.

Implanted arrays of silicon nanowires

The new prosthesis uses arrays of nanowires that simultaneously sense light and electrically stimulate the retina. The nanowires provide higher resolution than anything achieved by other devices — closer to the dense spacing of photoreceptors in the human retina, according to the researchers.*

Comparison of retina and electrode geometries between an existing retinal prosthesis and new nanoengineered prosthesis design. (left) Planar platinum electrodes (gray) of the FDA-approved Argus II retinal prosthesis (a 60-element array with 200 micrometer electrode diameter). (center) Retinal photoreceptor cells: rods (yellow) and cones (green). (right) Fabricated silicon nanowires (1 micrometer in diameter) at the same spatial magnification as photoreceptor cells. (credit: Science Photo Library and Sohmyung Ha et al./ J. Neural Eng.)

Existing retinal prostheses require a vision sensor (such as a camera) outside of the eye to capture a visual scene and then transform it into signals that sequentially stimulate a matrix of retinal neurons. Instead, the silicon nanowires mimic the retina’s light-sensing cones and rods to directly stimulate retinal cells. The nanowires are bundled into a grid of electrodes that is directly activated by light.

This direct, local translation of incident light into electrical stimulation makes for a much simpler — and scalable — architecture for a prosthesis, according to the researchers.

Wireless power and telemetry system

For the new device, power is delivered wirelessly from outside the body to the implant through an inductive powering telemetry system. Data for the nanowires is sent over the same wireless link at record speed and energy efficiency. The telemetry system transmits both power and data over a single pair of inductive coils: one transmitting from outside the body, the other receiving inside the eye.**

Three of the researchers have co-founded La Jolla-based Nanovision Biosciences, a partner in this study, to further develop and translate the technology into clinical use, with the goal of restoring functional vision in patients with severe retinal degeneration. Animal tests with the device are in progress, with clinical trials to follow.***

The research was described in a recent issue of the Journal of Neural Engineering. It was funded by Nanovision Biosciences, Qualcomm Inc., and the Institute of Engineering in Medicine and the Clinical and Translational Research Institute at UC San Diego.

* For visual acuity of 20/20, an electrode pixel size of 5 μm (micrometers) is required; 20/200 visual acuity requires 50 μm. The minimum number of electrodes required for pattern recognition or reading text is estimated to be about 600. The new nanoengineered silicon nanowire electrodes are 1 μm in diameter; 2,500 silicon nanowires were used in the experiment.
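The two data points above imply a simple linear relationship between the Snellen denominator and the required electrode pixel size (20/20 needs 5 μm; 20/200 needs 50 μm). A minimal sketch of that arithmetic, assuming the scaling really is linear (an interpolation for illustration, not a claim from the paper):

```python
def pixel_size_um(snellen_denominator: float) -> float:
    """Estimated electrode pixel size (micrometers) for 20/x acuity,
    linearly scaled from the article's two data points:
    20/20 -> 5 um, 20/200 -> 50 um."""
    return 5.0 * (snellen_denominator / 20.0)

print(pixel_size_um(20))   # 5.0  (20/20 vision)
print(pixel_size_um(200))  # 50.0 (20/200, the legal-blindness threshold)
```

At 1 μm diameter, the nanowire electrodes are well below even the 5 μm target for 20/20 acuity.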

** The device is highly energy efficient because it minimizes energy losses in wireless power and data transmission and in the stimulation process, recycling the electrostatic energy circulating within the inductive resonant tank and between the electrode capacitance and the resonant tank. Up to 90 percent of the transmitted energy is actually delivered and used for stimulation, which means less RF radiation emitted during wireless power transmission and less heating of the surrounding tissue from dissipated power.
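The 90 percent figure amounts to a simple power budget: of the RF energy transmitted, up to nine tenths reaches the electrodes for stimulation, and the remainder is dissipated, largely as tissue heating. A sketch of that bookkeeping (the 10 mW input below is illustrative, not a figure from the paper):

```python
def power_budget(transmitted_mw: float, efficiency: float = 0.90):
    """Split transmitted RF power into delivered stimulation power
    and dissipated power (heat), given an end-to-end efficiency."""
    delivered = transmitted_mw * efficiency
    dissipated = transmitted_mw - delivered
    return delivered, dissipated

delivered, dissipated = power_budget(10.0)  # illustrative 10 mW input
print(delivered, dissipated)  # 9.0 1.0 (mW)
```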

These are primary cortical neurons cultured on the surface of an array of optoelectronic nanowires. Here a neuron is pulling on the nanowires, indicating that the cell is doing well on this material. (credit: UC San Diego)

*** For proof-of-concept, the researchers inserted the wirelessly powered nanowire array beneath a transgenic rat retina with rhodopsin P23H knock-in retinal degeneration. The degenerated retina interfaced in vitro with a microelectrode array for recording extracellular neural action potentials (electrical “spikes” from neural activity).


Abstract of Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry

Objective. Despite considerable advances in retinal prostheses over the last two decades, the resolution of restored vision has remained severely limited, well below the 20/200 acuity threshold of blindness. Towards drastic improvements in spatial resolution, we present a scalable architecture for retinal prostheses in which each stimulation electrode is directly activated by incident light and powered by a common voltage pulse transferred over a single wireless inductive link. Approach. The hybrid optical addressability and electronic powering scheme provides separate spatial and temporal control over stimulation, and further provides optoelectronic gain for substantially lower light intensity thresholds than other optically addressed retinal prostheses using passive microphotodiode arrays. The architecture permits the use of high-density electrode arrays with ultra-high photosensitive silicon nanowires, obviating the need for excessive wiring and high-throughput data telemetry. Instead, the single inductive link drives the entire array of electrodes through two wires and provides external control over waveform parameters for common voltage stimulation. Main results. A complete system comprising inductive telemetry link, stimulation pulse demodulator, charge-balancing series capacitor, and nanowire-based electrode device is integrated and validated ex vivo on rat retina tissue. Significance. Measurements demonstrate control over retinal neural activity both by light and electrical bias, validating the feasibility of the proposed architecture and its system components as an important first step towards a high-resolution optically addressed retinal prosthesis.

Reboot of The Matrix in the works

(credit: Warner Bros.)

Warner Bros. is in the early stages of developing a relaunch of The Matrix, The Hollywood Reporter revealed today (March 14, Pi day, appropriately).

The Matrix, the iconic 1999 sci-fi movie, “is considered one of the most original films in cinematic history,” says THR.

The film “depicts a dystopian future in which reality as perceived by most humans is actually a simulated reality called ‘the Matrix,’ created by sentient machines to subdue the human population, while their bodies’ heat and electrical activity are used as an energy source,” Wikipedia notes. “Computer programmer ‘Neo’ learns this truth and is drawn into a rebellion against the machines, which involves other people who have been freed from the ‘dream world.’”

Keanu Reeves said he would be open to returning for another installment of the franchise if the Wachowskis were involved, according to THR (they are not currently involved).

Interestingly, Carrie-Anne Moss, who played Trinity in the film series, now stars in HUMANS as a scientist developing the technology to upload a person’s consciousness into a synth body.

Resisting microcracks from metal fatigue

Researchers have developed a type of steel with three characteristics that help it resist microcracks that lead to fatigue failure: a layered nanostructure, a mixture of microstructural phases with different degrees of hardness, and a metastable composition. They compared samples of metal with just one or two of these key attributes (top left, top right, and bottom left) and with all three (bottom right). The metal alloy with all three attributes outperformed all the others in crack resistance. (credit: Courtesy of the researchers)

A team of researchers at MIT and in Japan and Germany has found a way to greatly reduce the effects of metal fatigue by incorporating a laminated nanostructure into the steel. The layered structuring gives the steel a kind of bone-like resilience, allowing it to deform without letting the microcracks that lead to fatigue failure spread.

Metal fatigue can lead to abrupt and sometimes catastrophic failures in parts that undergo repeated loading, or stress. It’s a major cause of failure in structural components of everything from aircraft and spacecraft to bridges and power plants. As a result, such structures are typically built with wide safety margins that add to costs.

The findings are described in a paper in the journal Science by C. Cem Tasan, the Thomas B. King Career Development Professor of Metallurgy at MIT; Meimei Wang, a postdoc in his group; and six others at Kyushu University in Japan and the Max Planck Institute in Germany.

“Loads on structural components tend to be cyclic,” Tasan says. For example, an airplane goes through repeated pressurization changes during every flight, and components of many devices repeatedly expand and contract due to heating and cooling cycles. While such effects typically are far below the kinds of loads that would cause metals to change shape permanently or fail immediately, they can cause the formation of microcracks, which over repeated cycles of stress spread a bit further and wider, ultimately creating enough of a weak area that the whole piece can fracture suddenly.
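The cycle-by-cycle crack growth Tasan describes is commonly modeled with the Paris law, da/dN = C(ΔK)^m, in which the stress-intensity range ΔK itself grows with crack length, so growth accelerates. This is a standard textbook model, not one taken from this paper, and the material constants below are illustrative assumptions:

```python
import math

# Paris-law fatigue crack growth: da/dN = C * (dK)**m, with
# dK = Y * d_sigma * sqrt(pi * a). Constants are illustrative
# textbook-style values for a steel, not from the Science paper.
C, m = 1e-12, 3.0        # Paris constants (da in m/cycle, dK in MPa*sqrt(m))
Y, d_sigma = 1.0, 100.0  # geometry factor, cyclic stress range (MPa)

def grow_crack(a0_m: float, cycles: int) -> float:
    """Crack length (m) after `cycles` load cycles, stepped per cycle."""
    a = a0_m
    for _ in range(cycles):
        dK = Y * d_sigma * math.sqrt(math.pi * a)  # stress-intensity range
        a += C * dK ** m                           # per-cycle growth
    return a

# A 1 mm flaw grows only fractions of a nanometer per cycle, but the
# growth compounds over many cycles until sudden fracture becomes possible:
print(grow_crack(1e-3, 10_000))
```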

Nature-inspired

Tasan and his team were inspired by the way nature addresses the same kind of problem, making bones lightweight but very resistant to crack propagation. A major factor in bone’s fracture resistance is its hierarchical mechanical structure, with different patterns of voids and connections at many different length scales, and a lattice-like internal structure that combines strength with light weight.

So the team investigated microstructures that would mimic this in a metal alloy, developing a kind of steel that has three key characteristics, which combine to limit the spread of cracks that do form:

  • A layered structure that tends to keep cracks from spreading beyond the layers where they start.
  • Microstructural phases with different degrees of hardness, which complement each other, so when a crack starts to form, “every time it wants to propagate further, it needs to follow an energy-intensive path,” and the result is a great reduction in such spreading.
  • A metastable composition — tiny areas within it are poised between different stable states, some more flexible than others, and their phase transitions can help absorb the energy of spreading cracks and even lead the cracks to close back up.

The next step, Tasan says, is to scale up production of the material to commercially viable quantities and to define which applications would benefit most.

The research was supported by the European Research Council and MIT’s Department of Materials Science and Engineering.


Abstract of Bone-like crack resistance in hierarchical metastable nanolaminate steels

Fatigue failures create enormous risks for all engineered structures, as well as for human lives, motivating large safety factors in design and, thus, inefficient use of resources. Inspired by the excellent fracture toughness of bone, we explored the fatigue resistance in metastability-assisted multiphase steels. We show here that when steel microstructures are hierarchical and laminated, similar to the substructure of bone, superior crack resistance can be realized. Our results reveal that tuning the interface structure, distribution, and phase stability to simultaneously activate multiple micromechanisms that resist crack propagation is key for the observed leap in mechanical response. The exceptional properties enabled by this strategy provide guidance for all fatigue-resistant alloy design efforts.