Ingestible capsule uses light-emitting bacteria to monitor gastrointestinal health

MIT-designed biosensor capsule uses genetically engineered light-emitting bacteria (right) to detect molecules that identify bleeding or other gastrointestinal problems. Ultra-low-power electronics (left) sense the light and send diagnostic information wirelessly to a cellphone. (credit: Lillie Paquette/MIT)

MIT engineers have designed and built a tiny ingestible biosensor* capsule that can diagnose gastrointestinal problems, and have demonstrated its ability to detect bleeding in pigs.

Currently, patients suspected of bleeding from a gastric ulcer, for example, must undergo an endoscopy to diagnose the problem, which often requires sedation.

If the engineers can shrink the sensor capsule and extend it to detect a variety of other conditions, the research could transform the diagnosis of gastrointestinal diseases and conditions, according to the researchers.

Diagnosing gastrointestinal diseases in real time

To detect diseases or conditions, the genetically engineered bacteria (green) are placed into multiple wells (blue), covered by a semipermeable membrane (white) that allows small molecules (red) from the surrounding environment to diffuse through. The bacteria luminesce (glow) when they sense the specific type of molecule they are designed for. (In the experiment with pigs, heme — part of the red hemoglobin blood pigment — indicated bleeding.) A phototransistor (brown) measures the amount of light produced by the bacterial cells and relays that information to a microprocessor in the capsule, which then sends a wireless signal to a nearby computer or smartphone. (credit: Mark Mimee et al./Science)

The researchers showed that the ingestible biosensor could correctly determine whether any blood was present in a pig’s stomach. They anticipate that this type of sensor could be deployed for either one-time use or to remain in the digestive tract for several days or weeks, sending continuous signals. The sensors could also be designed to carry multiple strains of bacteria, allowing for diagnosing multiple diseases and conditions.
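
The capsule’s sense-and-report logic can be sketched in a few lines. This is a hypothetical illustration only: the threshold value, function names, and payload format are assumptions, not details from the paper.

```python
# Hypothetical sketch of the capsule's sense-and-report logic.
# The threshold, function names, and payload format are illustrative only.

LUMINESCENCE_THRESHOLD = 0.05  # normalized photocurrent; assumed calibration value

def diagnose(readings, threshold=LUMINESCENCE_THRESHOLD):
    """Map each well's normalized luminescence reading to a positive/negative flag.
    In the pig experiment, a positive heme-sensing well would indicate bleeding."""
    return {well: reading > threshold for well, reading in enumerate(readings)}

def report(flags):
    """Build the small payload the capsule would send wirelessly to a phone."""
    return {"positive_wells": [well for well, hit in flags.items() if hit]}

flags = diagnose([0.01, 0.21, 0.02])  # three bacterial wells, one glowing
print(report(flags))                  # {'positive_wells': [1]}
```

In practice the readings would come from the phototransistors under each well; here they are supplied as plain numbers to keep the sketch self-contained.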

The researchers plan to reduce the size of the sensor capsule (currently 10 millimeters wide by 30 millimeters long) and to study how long the bacteria cells can survive in the digestive tract. They also hope to develop sensors for gastrointestinal conditions other than bleeding.**

Reference: Science. Source: MIT.

* The sensor requires only 13 microwatts of power. The researchers equipped the sensor with a 2.7-volt battery, which they estimate could power the device for about 1.5 months of continuous use. They say it could also be powered by a voltaic cell sustained by acidic fluids in the stomach, using previously developed MIT technology.
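
The footnote’s numbers imply a remarkably small battery. A quick back-of-envelope check (our arithmetic, not the paper’s):

```python
# Back-of-envelope check of the footnote's power budget (illustrative arithmetic).
power_w = 13e-6      # 13 microwatts of sensor power
voltage_v = 2.7      # battery voltage
runtime_h = 45 * 24  # ~1.5 months of continuous use, in hours

current_a = power_w / voltage_v              # ~4.8 microamps of draw
capacity_mah = current_a * runtime_h * 1000  # required charge in milliamp-hours
print(round(capacity_mah, 1))                # 5.2 -- a ~5 mAh cell would suffice
```

That is far below the capacity of even a small coin cell, which is consistent with the researchers’ suggestion that a stomach-acid voltaic cell could power the device instead.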

** For example, one of the sensors they designed detects a sulfur-containing ion called thiosulfate, which is linked to inflammation and could be used to monitor patients with Crohn’s disease or other inflammatory conditions. Another one detects a bacterial signaling molecule called AHL, which can serve as a marker for gastrointestinal infections because different types of bacteria produce slightly different versions of the molecule.

High-quality carbon nanotubes made from carbon dioxide in the air break the manufacturing cost barrier

Carbon dioxide converted to small-diameter carbon nanotubes grown on a stainless steel surface. (credit: Pint Lab/Vanderbilt University)

Vanderbilt University researchers have discovered a technique to cost-effectively convert carbon dioxide from the air into a type of carbon nanotubes that they say is “more valuable than any other material ever made.”

Carbon nanotubes are super-materials that can be stronger than steel and more conductive than copper. So why, despite much research, aren’t they used in applications ranging from batteries to tires?

Answer: high manufacturing costs and the resulting steep prices, according to the researchers.*

The price ranges from $100–200 per kilogram for the “economy class” carbon nanotubes with larger diameters and poorer properties, up to $100,000 per kilogram and above for the “first class” carbon nanotubes — ones with a single wall, the smallest diameters**, and the most amazing properties, Cary Pint, PhD, an assistant professor in the Mechanical Engineering department at Vanderbilt University, explained to KurzweilAI.

A new process for making cost-effective carbon nanotubes

The researchers have demonstrated a new process for creating carbon-nanotube-based material, using carbon dioxide as the feedstock.

  • They achieved the smallest-diameter and most valuable CNTs ever reported in the literature for this approach.
  • They used sustainable electrochemical synthesis.***
  • A spinoff, SkyNano LLC, is now doing this with far less cost and energy input than conventional methods for making these materials. “That means as market prices start to change, our technology will survive and the more expensive technologies will get shaken out of the market,” said Pint. “We’re aggressively working toward scaling this process up in a big way.”
  • There are implications for reducing carbon dioxide in the atmosphere.****

“One of the most exciting things about what we’ve done is use electrochemistry to pull apart carbon dioxide into elemental constituents of carbon and oxygen and stitch together, with nanometer precision, those carbon atoms into new forms of matter,” said Pint. “That opens the door to being able to generate really valuable products with carbon nanotubes.” These materials, which Pint calls “black gold,” could steer the conversation from the negative impact of emissions to how we can use them in future technology.

“These could revolutionize the world,” he said.

Reference: ACS Appl. Mater. Interfaces May 1, 2018. Source: Vanderbilt University

* This BCC Research market report has a detailed discussion of carbon nanotube costs: Global Markets and Technologies for Carbon Nanotubes. Also see Energy requirements, an open-access supplement to the ACS paper.

** “Small-diameter” in this study refers to about 10 nanometers or less. Small-diameter carbon nanotubes include few-walled (about 3–10 walls), double-walled, and single-walled carbon nanotubes. These all have higher economic value because of their enhanced physical properties, broader appeal for applications, and greater difficulty of synthesis compared with their larger-diameter counterparts. “Larger-diameter” carbon nanotubes refer to those with outer diameters generally up to about 50 nanometers; beyond this diameter, these materials lose the value that the properties of small-diameter carbon nanotubes enable for applications.

*** The researchers used mechanisms for controlling electrochemical synthesis of CNTs from the capture and conversion of ambient CO2 in molten salts. Iron catalyst layers are deposited at different thicknesses onto stainless steel to produce cathodes, and atomic layer deposition of Al2O3 (aluminum oxide) is performed on nickel to produce a corrosion-resistant anode. The research team showed that a process called “Ostwald ripening” — in which the nanoparticles that seed carbon nanotube growth coarsen to larger diameters — is a key obstacle to producing the far more useful small-diameter tubes. The team showed they could partially overcome this by tuning electrochemical parameters to minimize the formation of these larger nanoparticles.

**** “According to the EPA, the United States alone emits more than 6,000 million metric tons of carbon dioxide into the atmosphere every year.  Besides being implicated as a contributor to global climate change, these emissions are currently wasted resources that could otherwise be used productively to make useful materials. At SkyNano, we focus on the electrochemical conversion of carbon dioxide into all carbon-based nanomaterials which can be used for a variety of applications. Our technology overcomes cost limitations associated with traditional carbon nanomaterial production and utilizes carbon dioxide as the only direct chemical feedstock.” — SkyNano Technologies

Self-healing material mimics the resilience of soft biological tissue

A self-healing material that spontaneously repairs itself in real time from extreme mechanical damage, such as holes cut in it multiple times. New pathways are formed instantly and autonomously to keep this circuit functioning and the device moving. (credit: Carnegie Mellon University College of Engineering)

Carnegie Mellon University (CMU) researchers have created a self-healing material that spontaneously repairs itself under extreme mechanical damage, similar to many natural organisms. Applications include bio-inspired first-responder robots that instantly heal themselves when damaged and wearable computing devices that recover from being dropped.

The new material is composed of liquid metal droplets suspended in a soft elastomer (a material with elastic properties, such as rubber). When damaged, the droplets rupture to form new connections with neighboring droplets, instantly rerouting electrical signals. Circuits produced with conductive traces of this material remain fully and continuously operational when severed, punctured, or have material removed.

“Other research in soft electronics has resulted in materials that are elastic, but are still vulnerable to mechanical damage that causes immediate electrical failure,” said Carmel Majidi, PhD, a CMU associate professor of mechanical engineering, who also directs the Integrated Soft Materials Laboratory. “The unprecedented level of functionality of our self-healing material can enable soft-matter electronics and machines to exhibit the extraordinary resilience of soft biological tissue and organisms.”

The self-healing material is also highly electrically conductive, and its conductivity is unaffected when the material is stretched. That makes it ideal for uses in power and data transmission — for example, in a health-monitoring device worn by an athlete during rigorous training, or an inflatable structure that can withstand environmental extremes on Mars.

Reference: Nature Materials. Source: Carnegie Mellon University.

Revolutionary 3D nanohybrid lithium-ion battery could allow for charging in just seconds [UPDATED]

Left: Conventional composite battery design, with 2D stacked anode and cathode (black and red materials). Right: New 3D nanohybrid lithium-ion battery design, with multiple anodes and cathodes nanometers apart for high-speed charging. (credit: Cornell University)

Cornell University engineers have designed a revolutionary 3D lithium-ion battery that could be charged in just seconds.

In a conventional battery, the battery’s anode and cathode* (the two sides of a battery connection) are stacked in separate columns (the black and red columns in the left illustration above). For the new design, the engineers instead used thousands of nanoscale (ultra-tiny) anodes and cathodes (shown in the illustration on the right above).

Putting those thousands of anodes and cathodes just 20 nanometers (billionths of a meter) apart dramatically extends the electrode area, allowing for extremely fast charging** (in seconds or less) while also holding more power for longer.
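
The speedup from shrinking the electrode spacing follows from the standard diffusion scaling, where ion transport time goes as the square of the distance (t ~ L²/D). A rough illustration of that scaling (the diffusivity value is an assumed order of magnitude, not a number from the paper):

```python
# Rough scaling illustration: ion transport time t ~ L^2 / D.
# The diffusivity value is an assumed order of magnitude, not from the paper.
D = 1e-14  # m^2/s, assumed electrolyte Li-ion diffusivity

def transport_time_s(length_m, diffusivity=D):
    """Characteristic ion diffusion time across a distance length_m."""
    return length_m ** 2 / diffusivity

t_conventional = transport_time_s(20e-6)  # ~20-micrometer spacing, conventional stack
t_nano = transport_time_s(20e-9)          # 20-nanometer spacing in the 3D design
print(t_conventional / t_nano)            # ~1e6: a million-fold shorter transport time
```

Whatever the true diffusivity, the ratio depends only on the two lengths: cutting the spacing by a factor of 1,000 cuts the transport time by a factor of a million.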

Left-to-right: The anode was made of self-assembling (automatically grown) thin-film carbon material with thousands of regularly spaced pores (openings), each about 40 nanometers wide. The pores were coated with a 10 nanometer-thick electrolyte* material (the blue layer lining the black anode layer, as shown in the “Electrolyte coating” illustration), which is electronically insulating but conducts ions (an ion is an atom or molecule that has an electrical charge and is what flows inside a battery instead of electrons). The cathode was made from sulfur. (credit: Cornell University)

In addition, unlike traditional batteries, the battery’s electrolyte material has no pinholes (tiny holes), which can short-circuit a battery and cause fires in mobile devices such as cellphones and laptops.

The engineers are still perfecting the technique, but they have applied for patent protection on the proof-of-concept work, which was funded by the U.S. Department of Energy and in part by the National Science Foundation.

Reference: Energy & Environmental Science (open access with registration) March 9, 2018. Source: Cornell University May 16, 2018.

* How batteries work

Batteries have three parts. An anode (-) and a cathode (+) — the positive and negative sides at either end of a traditional battery — which are hooked up to an electrical circuit (green); and the electrolyte, which keeps the anode and cathode apart and allows ions (electrically charged atoms or molecules) to flow. (credit: Northwestern University Qualitative Reasoning Group)

 

 

 

 

 

 

 

 

 

 

 

 

 

** Also described as high “power density.” In addition, “Batteries with nanostructured architectures promise improved power output, as close proximity of the two electrodes is beneficial for fast ion diffusion, while high material loading simultaneously enables high energy density” (hold more power for longer). — J. G. Werner et al./Energy Environ. Sci.

UPDATED May 21, 2018 to include explanations of technical terms

MIT’s modular plug-and-play blocks allow for building medical diagnostic devices

Tiny 1/2-inch, low-cost “Ampli blocks” can be assembled to create diagnostic devices. The blocks, which simply consist of a tiny sheet of paper or glass fiber sandwiched between a plastic or metal block and a glass cover, snap together to form a complete diagnostic procedure. Some of the blocks contain channels for samples to flow straight through, some have turns, and some can receive a sample from a pipette, or mix multiple reagents (chemicals) together. The blocks are color-coded by function, making it easy to assemble pre-designed devices (the researchers plan to put instructions online). (credit: MIT Little Devices Lab)

Researchers at MIT’s Little Devices Lab have developed a set of modular “plug-and-play” blocks that can be put together in different ways to produce medical diagnostic devices for detecting cancer and infectious diseases such as Zika virus.

The “Ampli blocks” require little expertise to assemble, and can test blood glucose levels in diabetic patients or detect viral infection, for example. They are inexpensive (about 6 U.S. cents for four blocks) and require no refrigeration, making them particularly valuable for small, low-resource laboratories in the developing world. Small labs can now create their own libraries of plug-and-play diagnostics to independently treat their own local patient populations.

Customized diagnostics on a modular “biochemical breadboard”

KurzweilAI has previously reported on small, portable diagnostic devices based on chemical reactions that occur on paper strips. But these devices haven’t been widely deployed. That’s mainly because they’re not designed for mass-producing a diagnostic test — especially when a disease doesn’t affect a large number of people, the researchers say.

The Little Devices Lab has created a kit of DIY modular components that can be easily put together to generate exactly what a lab needs. So far, the MIT lab has created about 40 different building blocks that lab workers around the world could easily assemble on their own. This approach is similar to assembling radios and other electronic devices from commercially available electronic “breadboards” in the 1970s, or more recently, creating your own homemade computer gadgets with an Arduino breadboard or a Raspberry Pi computer.

Jose Gomez-Marquez, co-director of MIT’s Little Devices Lab, holds a sheet of tiny diagnostic papers, which can be easily printed and attached to blocks to form diagnostic devices. (credit: Melanie Gonick/MIT)

The reusable blocks can also perform different biochemical functions. Many contain antibodies that can detect a specific molecule in a blood or urine sample. The antibodies are attached to nanoparticles that change color when the target molecule is present, indicating a positive result.

These blocks can be aligned in different ways, allowing the user to create diagnostics based on one reaction or a series of reactions. In one example, the researchers combined blocks that detect three different molecules to create a test for isonicotinic acid, which can reveal whether tuberculosis patients are taking their medication.
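
The snap-together idea can be modeled as a simple pipeline, where a sample flows through each block in turn. This is a hypothetical sketch: the block names, the sample representation, and the color-change readout are illustrative, not the lab’s actual part list.

```python
# Hypothetical model of chaining Ampli-style blocks into a device.
# Block names and the sample representation are illustrative only.

class Block:
    def __init__(self, name, transform):
        self.name, self.transform = name, transform
    def run(self, sample):
        return self.transform(sample)

def assemble(*blocks):
    """Snap blocks together: the sample flows through each block in order."""
    def device(sample):
        for block in blocks:
            sample = block.run(sample)
        return sample
    return device

# A toy two-step device: mix in a reagent, then read an antibody block
mixer = Block("mixer", lambda s: {**s, "reagent": True})
detector = Block("anti-X", lambda s: {**s, "positive": s["reagent"] and s.get("target", False)})

device = assemble(mixer, detector)
print(device({"target": True})["positive"])  # True
```

The point of the design is exactly this composability: the same small library of blocks can be reassembled into many different one-reaction or multi-reaction diagnostics.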

The researchers also showed that these blocks can outperform previous versions of paper diagnostic devices in some cases. For example, they found that they could run a sample back and forth over a test strip multiple times, enhancing the signal. This could make it easier to get reliable results from urine and saliva samples, which are usually more dilute than blood samples, but are easier to obtain from patients.

Getting the technology into the hands of small labs and non-experts

The MIT team is now working on tests for human papilloma virus, malaria, and Lyme disease, among others. They are also working on blocks that can synthesize useful compounds, including drugs, and even on blocks that incorporate electrical components such as LEDs.

The ultimate goal is to get the technology into the hands of small labs (and non-experts) in both industrialized and developing countries, so they can create their own diagnostics. The MIT team has already sent Ampli-block kits to labs in Chile and Nicaragua, where they have been used to develop devices to monitor patient adherence to TB treatment and to test for a genetic variant that makes malaria more difficult to treat.

The MIT researchers are now investigating large-scale manufacturing techniques. They hope to launch a company to manufacture and distribute the kits around the world. They’re also opening up the platform to other researchers.

Reference: Advanced Healthcare Materials May 16, 2018. Source: MIT May 16, 2018.

Brain-computer-interface training helps tetraplegics win avatar race

Pilot and avatar at Cybathlon (credit: Cybathlon)

Noninvasive brain–computer interface (BCI) systems can restore functions lost to disability — allowing for spontaneous, direct brain control of external devices without the risks associated with surgical implantation of neural interfaces. But as machine-learning algorithms have become faster and more powerful, researchers have mostly focused on increasing performance by optimizing pattern-recognition algorithms.

But what about letting patients actively participate with AI in improving performance?

To test that idea, researchers at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland conducted research using “mutual learning” between computer and humans — the humans being two severely impaired (tetraplegic) participants with chronic spinal cord injury. The goal: win a live virtual racing game at an international event.

Controlling a racing-game avatar using a BCI

A computer graphical user interface for the race track in the Cybathlon 2016 “Brain Runners” game. “Pilots” (participants) had to deliver (by thinking) the proper command in each color pad (cyan, magenta, yellow) to accelerate their own avatar in the race. (credit: Serafeim Perdikis and Robert Leeb)

The participants were trained to improve control of an avatar (a person-substitute shown on a computer screen) in a virtual racing game. The experiment used a brain-computer interface (BCI), which uses electrodes on the head to pick up control signals from a person’s brain.

Each participant (called a “pilot”) controlled an on-screen avatar in a three-part race. This required mastery of separate commands for spinning, jumping, sliding, and walking without stumbling.

After training for several months, on Oct. 8, 2016, the two pilots participated (on the “Brain Tweakers” team) in Cybathlon in Zurich, Switzerland — the first international championship for people with disabilities piloting bionic assistive technology.*

The BCI-based race consisted of four brain-controlled avatars competing in a virtual racing game called “Brain Runners.” To accelerate their avatars, pilots had to issue up to three distinct mental commands (or intentionally idle) on corresponding color-coded track segments.

Maximizing BCI performance by humanizing mutual learning

The two participants in the EPFL research had the best three times overall in the competition. One of those pilots won the gold medal and the other held the tournament record.

The researchers believe that with the mutual-learning approach, they have “maximized the chances for human learning by infrequent recalibration of the computer, leaving time for the human to better learn how to control the sensorimotor rhythms that would most efficiently evoke the desired avatar movement. Our results showcase strong and continuous learning effects at all targeted levels — machine, subject, and application — with both [participants] over a longitudinal study lasting several months,” the researchers conclude.
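
The key scheduling idea, infrequent machine recalibration so the human practices against a stable decoder, can be sketched in a few lines. The session counts and function names here are hypothetical, chosen only to illustrate the principle the researchers describe.

```python
# Illustrative schedule for "mutual learning": the decoder is refit only
# occasionally, so the pilot practices against a stable brain-to-command
# mapping in between. Session counts and names are hypothetical.

RECALIBRATE_EVERY = 10

def train_schedule(n_sessions, recalibrate_every=RECALIBRATE_EVERY):
    """Return, for each session, whether the machine or the human adapts."""
    return ["recalibrate decoder" if s % recalibrate_every == 0 else "human practice"
            for s in range(1, n_sessions + 1)]

plan = train_schedule(30)
print(plan.count("recalibrate decoder"))  # 3 machine updates over 30 sessions
```

The contrast is with the common alternative of recalibrating the decoder every session, which gives the user a moving target and little chance to learn stable sensorimotor rhythms.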

Reference (open-source): PLoS Biology May 10, 2018

* At Cybathlon, each team comprised a pilot together with scientists and technology providers of the functional and assistive devices used, which can be prototypes developed by research labs or companies, or commercially available products. That also makes Cybathlon a competition between companies and research laboratories. The next Cybathlon will be held in Zurich in 2020.

CYBATHLON by ETH Zurich

Two major advances in autonomous technologies that rival human abilities

The voice of singer John Legend, shown here at Google I/O, will be among six celebrity voices to come to Google Duplex voice technology and Google Assistant later this year, along with phones and home speakers. (credit: Google)

Google Duplex

Google’s new artificial-intelligence Google Duplex voice technology for natural conversations, introduced at the Google I/O event this past week, cleverly blurs the line between human and machine intelligence.

Here are two impressive examples of Duplex’s natural conversations on phone calls (using different voices):

Duplex scheduling a hair salon appointment:

Duplex calling a restaurant:

Google Duplex is designed* to make its voice on phone conversations sound natural** — “thanks to advances in understanding, interacting, timing, and speaking,” according to the Google AI Blog. For example, it uses natural-sounding “hmm”s and “uh”s, and appropriate latency (pause time) to match people’s expectations: “For example, after people say something simple, e.g., ‘hello?,’ they expect an instant response.”

Google also said at its I/O developer conference that six new voices are coming to Google Duplex, including singer-songwriter-actor John Legend’s. Legend’s voice, among others, will also come to Google Assistant later this year, and will be included in phones and home speakers.

* “At the core of Duplex is a recurrent neural network (RNN) … built using [Google's] TensorFlow Extended (TFX).”

** To address “creepy” concerns straight out of Westworld, a Google spokesperson provided an email statement to CNET: “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified.”

Boston Dynamics

Boston Dynamics has announced two significant autonomous robot developments.

The dog-like SpotMini robot is now able to navigate a set path autonomously, as shown here (in addition to opening doors):

And the humanoid Atlas robot is now able to run and jump over objects:

FAA to team with local, state, and tribal governments and companies to develop safe drone operations

Future drone war as portrayed in “Call of Duty Black Ops 2” (credit: Activision Publishing)

U.S. Secretary of Transportation Elaine L. Chao announced today (May 9, 2018) that 10 state, local, and tribal governments have been selected* as participants in the U.S. Department of Transportation’s Unmanned Aircraft Systems (UAS) Integration Pilot Program.

The goal of the program: set up partnerships between the FAA and local, state and tribal governments. These will then partner with private sector participants to safely explore the further integration of drone operations.

“Data gathered from these pilot projects will form the basis of a new regulatory framework to safely integrate drones into our national airspace,” said Chao. Over the next two and a half years, the team will collect drone data involving night operations, flights over people and beyond the pilot’s line of sight, package delivery, detect-and-avoid technologies and the reliability and security of data links between pilot and aircraft.


North Carolina has been selected to test medical delivery with Zipline’s drones, which have been tested in more than 4,000 flights in Rwanda, according to MIT Technology Review.

At least 200 companies were approved to partner in the program, including Airbus, Intel, Qualcomm, Boeing, Ford Motor Co., Uber Technologies Inc., and FedEx (but not Amazon).

“At Memphis International Airport, drones may soon be inspecting planes and delivering airplane parts for FedEx Corp.,” reports Bloomberg. “In Virginia, drones operated by Alphabet’s Project Wing will be used to deliver goods to various communities and then researchers will get feedback from local residents. The data can be used to help develop regulations allowing widespread and routine deliveries sometime in the future.”


The city of Reno, Nevada is partnered with Nevada-based Flirtey, a company that has experimented with delivering defibrillators by drone.

“In less than a decade, the potential economic benefit of integrating [unmanned aircraft systems] in the nation’s airspace is estimated at $82 billion and could create 100,000 jobs,” the announcement said. “Fields that could see immediate opportunities from the program include commerce, photography, emergency management, public safety, precision agriculture and infrastructure inspections.”

Criminals and terrorists already see immediate opportunities

But could making drones more accessible and ubiquitous have unintended consequences?

Consider these news reports:

  • A small 2-foot-long quadcopter — a drone with four propellers — crashed onto the White House grounds on January 26, 2015. The event raises some troubling questions about the possibility that terrorists using armed drones could one day attack the White House or other tightly guarded U.S. government locations. — CNN
  • ISIS flew over 300 drone missions in one month during the battle for Mosul, said Peter Singer, a senior fellow and strategist at the New America Foundation, during a November 2017 presentation. About one-third of those flights were armed strike missions. — C4ISRNET
  • ISIS released a propaganda video in 2017 showing them (allegedly) dropping a bomb on a Syrian army ammunition depot. — Vocativ
  • Footage obtained by the BBC shows a drone delivering drugs and mobile phones to London prisoners in April 2016. — BBC

“Last month the FAA said reports of drone-safety incidents, including flying improperly or getting too close to other aircraft, now average about 250 a month, up more than 50 percent from a year earlier,” according to a Nov. 2017 article by Bloomberg. “The reports include near-collisions described by pilots on airliners, law-enforcement helicopters or aerial tankers fighting wildfires.”

Worse, last winter, a criminal gang used a drone swarm to obstruct an FBI hostage raid, Defense One reported on May 3, 2018. The gang buzzed the hostage rescue team and fed video to the group’s other members via YouTube, according to Joe Mazel, the head of the FBI’s operational technology law unit.

“Some criminal organizations have begun to use drones as part of witness intimidation schemes: they continuously surveil police departments and precincts in order to see ‘who is going in and out of the facility and who might be co-operating with police,’ he revealed. … Drones are also playing a greater role in robberies and the like,” the article points out. “Beyond the well-documented incidence of house break-ins, criminal crews are using them to observe bigger target facilities, spot security gaps, and determine patterns of life: where the security guards go and when.

“In Australia, criminal groups have begun using drones as part of elaborate smuggling schemes,” Mazel said. And Andrew Scharnweber, associate chief of U.S. Customs and Border Protection, “described how criminal networks were using drones to watch Border Patrol officers, identify their gaps in coverage, and exploit them. Cartels are able to move small amounts of high-value narcotics across the border via drones with ‘little or no fear of arrest,’ he said.”

Congressional bill H.R. 4: FAA Reauthorization Act of 2018 attempts to address these problems by making it illegal to “weaponize” consumer drones and would require drones that fly beyond their operators’ line of sight to broadcast an identity code, allowing law enforcement to track and connect them to a real person, the article noted.
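
The bill’s broadcast-identity requirement amounts to a small periodic message linking an airborne drone to a registered operator. The payload below is purely hypothetical, to make the idea concrete; it does not reflect any real standard’s field layout.

```python
# A purely hypothetical remote-ID broadcast payload, illustrating the idea
# of linking a drone in flight to a registered operator. Field names are
# invented; no real standard's layout is implied.

import json
import time

def make_id_beacon(drone_id, operator_id, lat, lon, alt_m):
    """Encode one periodic identity broadcast as a compact JSON message."""
    return json.dumps({
        "drone_id": drone_id,        # registration code, trackable by law enforcement
        "operator_id": operator_id,  # links the aircraft to a real person
        "lat": lat, "lon": lon, "alt_m": alt_m,
        "ts": int(time.time()),      # timestamp of this broadcast
    })

beacon = make_id_beacon("N-12345", "OP-777", 38.89, -77.03, 120)
print(json.loads(beacon)["operator_id"])  # OP-777
```

A receiver on the ground would decode such messages and look the operator up in a registry, which is the tracking capability the bill envisions for flights beyond line of sight.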

How terrorists could use AI-enhanced autonomous drones


The Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, released this fictional video to depict a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.

But the next generation of drones might use AI-enabled swarming to become even more powerful and deadlier, in addition to self-driving vehicles for their next car bombs or assassinations, Defense One warned in another article on May 3, 2018.

“Max Tegmark’s book Life 3.0 notes the concern of UC Berkeley computer scientist Stuart Russell, who worries that the biggest winners from an AI arms race would be ‘small rogue states and non-state actors such as terrorists’ who can access these weapons through the black market,” the article notes.

“Tegmark writes that mass-produced, small AI-powered killer drones are ‘likely to cost little more than a smartphone.’ Would-be assassins could simply ‘upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible.’”

* The 10 selectees are:

  • Choctaw Nation of Oklahoma, Durant, OK
  • City of San Diego, CA
  • Virginia Tech – Center for Innovative Technology, Herndon, VA
  • Kansas Department of Transportation, Topeka, KS
  • Lee County Mosquito Control District, Ft. Myers, FL
  • Memphis-Shelby County Airport Authority, Memphis, TN
  • North Carolina Department of Transportation, Raleigh, NC
  • North Dakota Department of Transportation, Bismarck, ND
  • City of Reno, NV
  • University of Alaska-Fairbanks, Fairbanks, AK

Three dramatic new ways to visualize brain tissue and neuron circuits

Visualizing the brain: Here, tissue from a human dentate gyrus (a part of the brain’s hippocampus that is involved in the formation of new memories) was imaged transparently in 3D and colored-coded to reveal the distribution and types of nerve cells. (credit: The University of Hong Kong)

Visualizing human brain tissue in vibrant transparent colors

Neuroscientists from The University of Hong Kong (HKU) and Imperial College London have developed a new method called “OPTIClear” for 3D transparent color visualization (at the microscopic level) of complex human brain circuits.

To understand how the brain works, neuroscientists map how neurons (nerve cells) are wired to form circuits in both healthy and disease states. To do that, the scientists typically cut brain tissues into thin slices. Then they trace the entangled fibers across those slices — a complex, laborious process.

Making human tissues transparent. OPTIClear replaces that process by “clearing” (making tissues transparent) and using fluorescent staining to identify different types of neurons. In one study of more than 3,000 large neurons in the human basal forebrain, the researchers cut the time needed to visualize neurons, glial cells, and blood vessels in exquisite 3D detail from about three weeks to five days. Previous clearing methods (such as CLARITY) have been limited to rodent tissue.

Reference (open access): Nature Communications March 14, 2018. Source: HKU and Imperial College London, May 7, 2018

Watching millions of brain cells in a moving animal for the first time

Neurons in the hippocampus flash on and off as a mouse walks around with tiny camera lenses on its head. (credit: The Rockefeller University)

It’s a neuroscientist’s dream: being able to track the millions of interactions among brain cells in animals that move about freely — allowing for studying brain disorders. Now a new invention, developed at The Rockefeller University and reported today, is expected to give researchers a dynamic tool to do just that, eventually in humans.

The new tool can track neurons located at different depths within a volume of brain tissue in a freely moving rodent, or record the interplay among neurons when two animals meet and interact socially.

Microlens array for 3D recording. The technology consists of a tiny microscope attached to a mouse’s head, with a group of lenses called a “microlens array.” These lenses enable the microscope to capture images from multiple angles and depths on a sensor chip, producing a three-dimensional record of neurons blinking on and off as they communicate with each other through electrochemical impulses. (The mouse neurons are genetically modified to light up when they become activated.) A cable attached to the top of the microscope transmits the data for recording.

One challenge: Brain tissue is opaque, making light scatter, which makes it difficult to pinpoint the source of each neuronal light flash. The researchers’ solution: a new computer algorithm (program), known as SID, that extracts additional information from the scattered emission light.
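The SID algorithm itself is considerably more involved; as a loose, hypothetical illustration of the underlying idea — computationally unmixing overlapping light signals recorded at the sensor — here is a toy non-negative matrix factorization in Python (all names, dimensions, and data are invented for the example, and this is not the published method):

```python
import numpy as np

# Toy illustration (not the actual SID algorithm): recover two
# "neuron" activity time courses from mixed measurements.
# Hypothetical setup: each detector pixel sees a weighted mixture
# of the underlying sources, modeled as M ≈ W @ H with all entries
# non-negative (light intensities cannot be negative).
rng = np.random.default_rng(0)
H_true = rng.random((2, 50))          # 2 sources x 50 time frames
W_true = rng.random((10, 2))          # 10 pixels x 2 mixing weights
M = W_true @ H_true                   # observed mixed measurements

# Standard multiplicative-update non-negative matrix factorization.
W = rng.random((10, 2)) + 0.1
H = rng.random((2, 50)) + 0.1
for _ in range(500):
    H *= (W.T @ M) / (W.T @ W @ H + 1e-9)
    W *= (M @ H.T) / (W @ H @ H.T + 1e-9)

# Reconstruction error should be small: the sources were separated.
print(np.linalg.norm(M - W @ H) / np.linalg.norm(M))
```

The recovered rows of `H` approximate the individual sources' time courses (up to scaling and permutation), which is the general flavor of demixing overlapping emission light.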

Reference: Nature Methods. Source: The Rockefeller University May 7, 2018

Brain cells interacting in real time

Illustration: An astrocyte (green) interacts with a synapse (red), producing an optical signal (yellow). (credit: UCLA/Khakh lab)

Researchers at the David Geffen School of Medicine at UCLA can now peer deep inside a mouse’s brain to watch how star-shaped astrocytes (support glial cells in the brain) interact with synapses (the junctions between neurons) to signal each other and convey messages.

The method uses different colors of light, passed through a lens, to magnify objects invisible to the naked eye, resolving objects far smaller than earlier techniques could. That enables researchers to observe how brain damage alters the way astrocytes interact with neurons, for example, and to develop strategies to address those changes.

Astrocytes are believed to play a key role in neurological disorders such as Lou Gehrig’s disease (ALS), Alzheimer’s, and Huntington’s disease.

Reference: Neuron. Source: UCLA Khakh lab April 4, 2018.

round-up | Hawking’s radical instant-universe-as-hologram theory and the scary future of information warfare

A timeline of the Universe based on the cosmic inflation theory (credit: WMAP science team/NASA)

Stephen Hawking’s final cosmology theory says the universe was created instantly (no inflation, no singularity) and it’s a hologram

There was no singularity just after the big bang (and thus, no eternal inflation) — the universe was created instantly. And there were only three dimensions. So there’s only one finite universe, not a fractal or a multiverse — and we’re living in a projected hologram. That’s what Hawking and co-author Thomas Hertog (a theoretical physicist at the Catholic University of Leuven) have concluded — contradicting Hawking’s former big-bang singularity theory (with time as a dimension).

Problem: So how does time finally emerge? “There’s a lot of work to be done,” admits Hertog. Citation (open access): Journal of High Energy Physics, May 2, 2018. Source (open access): Science, May 2, 2018


Movies capture the dynamics of an RNA molecule from the HIV-1 virus. (photo credit: Yu Xu et al.)

Molecular movies of RNA guide drug discovery — a new paradigm for drug discovery

Duke University scientists have invented a technique that combines nuclear magnetic resonance imaging and computationally generated movies to capture the rapidly changing states of an RNA molecule.

It could lead to new drug targets and allow for screening millions of potential drug candidates. So far, the technique has predicted 78 compounds (and their preferred molecular shapes) with anti-HIV activity, out of 100,000 candidate compounds. Citation: Nature Structural and Molecular Biology, May 4, 2018. Source: Duke University, May 4, 2018.


Chromium tri-iodide magnetic layers between graphene conductors. By using four layers, the storage density could be multiplied. (credit: Tiancheng Song)

Atomically thin magnetic memory

University of Washington scientists have developed the first 2D (in a flat plane) atomically thin magnetic memory — encoding information using magnets that are just a few layers of atoms in thickness — a miniaturized, high-efficiency alternative to current disk-drive materials.

In an experiment, the researchers sandwiched two atomic layers of chromium tri-iodide (CrI3) — acting as memory bits — between graphene contacts and measured the on/off electron flow through the atomic layers.

The U.S. Dept. of Energy-funded research could dramatically increase future data-storage density while reducing energy consumption by orders of magnitude. Citation: Science, May 3, 2018. Source: University of Washington, May 3, 2018.


Definitions of artificial intelligence (credit: House of Lords Select Committee on Artificial Intelligence)

A Magna Carta for the AI age

A report by the House of Lords Select Committee on Artificial Intelligence in the U.K. lays out “an overall charter for AI that can frame practical interventions by governments and other public agencies.”

The key elements. AI should:

  • Be developed for the common good.
  • Operate on principles of intelligibility and fairness: users must be able to easily understand the terms under which their personal data will be used.
  • Respect rights to privacy.
  • Be grounded in far-reaching changes to education: teaching needs reform to utilize digital resources, and students must learn not only digital skills but also how to develop a critical perspective online.
  • Never be given the autonomous power to hurt, destroy or deceive human beings.

Source: The Washington Post, May 2, 2018.


(credit: CB Insights)

The future of information warfare

Memes and social networks have become weaponized, but many governments seem ill-equipped to understand the new reality of information warfare.

The weapons include:

  • Computational propaganda: digitizing the manipulation of public opinion
  • Advanced digital deception technologies
  • Malicious AI impersonating and manipulating people
  • AI-generated fake video and audio

Counter-weapons include:

  • Spotting AI-generated people
  • Uncovering hidden metadata to authenticate images and videos
  • Blockchain for tracing digital content back to the source
  • Detecting image and video manipulation at scale
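One of the counter-weapons mentioned — blockchain-style tracing of digital content back to its source — can be sketched with a minimal hash chain. This is a hypothetical illustration, not any real provenance system; all record fields and names are invented:

```python
import hashlib
import json

def add_record(chain, content: bytes, source: str) -> dict:
    """Append a provenance record linking content to its claimed source."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "source": source,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev_hash": prev_hash,  # links each record to the one before it
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify(chain) -> bool:
    """Check that no record was altered after being appended."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev_hash"] != prev:
            return False
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, b"original video bytes", "news-agency-A")
add_record(chain, b"edited video bytes", "unknown-uploader")
print(verify(chain))            # True: chain is intact
chain[0]["source"] = "forged"   # tamper with an earlier record
print(verify(chain))            # False: tampering breaks the chain
```

The design point is that each record's hash covers the previous record's hash, so retroactively changing any entry invalidates every later link, which is what makes such a ledger useful for tracing content provenance.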

Source (open-access): CB Insights Research Brief, May 3, 2018.