Saturn’s moon Titan has a chemical that could form bio-like ‘membranes,’ says NASA

Molecules of vinyl cyanide reside in the atmosphere of Titan, Saturn’s largest moon, says NASA. Titan is shown here in an optical (atmosphere) infrared (surface) composite from NASA’s Cassini spacecraft. Titan’s atmosphere is a veritable chemical factory, harnessing the light of the sun and the energy from fast-moving particles that orbit around Saturn to convert simple organic molecules into larger, more complex chemicals. (credit: B. Saxton (NRAO/AUI/NSF); NASA)

NASA researchers have found large quantities (2.8 parts per billion) of acrylonitrile* (vinyl cyanide, C2H3CN) in Titan’s atmosphere that could self-assemble as a sheet of material similar to a cell membrane.

Acrylonitrile (credit: NASA Goddard)

Consider these findings, presented July 28, 2017 in the open-access journal Science Advances, based on data from the ALMA telescope in Chile (and confirming earlier observations by NASA’s Cassini spacecraft):

Azotosome illustration (credit: James Stevenson/Cornell)

1. Researchers have proposed that acrylonitrile molecules could come together as a sheet of material similar to a cell membrane. The sheet could form a hollow, microscopic sphere that they dubbed an “azotosome.”

A bilayer, made of two layers of lipid molecules (credit: Mariana Ruiz Villarreal/CC)

2. The azotosome sphere could serve as a tiny storage and transport container, much like the spheres that biological lipid bilayers can form. The thin, flexible lipid bilayer is the main component of the cell membrane, which separates the inside of a cell from the outside world.

“The ability to form a stable membrane to separate the internal environment from the external one is important because it provides a means to contain chemicals long enough to allow them to interact,” said Michael Mumma, director of the Goddard Center for Astrobiology, which is funded by the NASA Astrobiology Institute.

Organic rain falling on a methane sea on Titan (artist’s impression) (credit: NASA Goddard)

3. Acrylonitrile condenses in Titan’s cold lower atmosphere and rains onto the moon’s solid icy surface, ending up in its seas of liquid methane.

Illustration showing organic compounds in Titan’s seas and lakes (ESA)

4. A sea on Titan named Ligeia Mare could have accumulated enough acrylonitrile to form about 10 million azotosomes in every milliliter (quarter-teaspoon) of liquid. Compare that to roughly a million bacteria per milliliter of coastal ocean water on Earth.

Chemistry in Titan’s atmosphere. Nearly as large as Mars, Titan has a hazy atmosphere made up mostly of nitrogen with a smattering of organic, carbon-based molecules, including methane (CH4) and ethane (C2H6). Planetary scientists theorize that this chemical make-up is similar to Earth’s primordial atmosphere. The conditions on Titan, however, are not conducive to the formation of life as we know it; it’s simply too cold (95 kelvins or -290 degrees Fahrenheit). (credit: ESA)

5. A related open-access study published July 26, 2017 in The Astrophysical Journal Letters notes that Cassini has also made the surprising detection of negatively charged molecules known as “carbon chain anions” in Titan’s upper atmosphere. These molecules are understood to be building blocks towards more complex molecules, and may have acted as the basis for the earliest forms of life on Earth.

“This is a known process in the interstellar medium, but now we’ve seen it in a completely different environment, meaning it could represent a universal process for producing complex organic molecules,” says Ravi Desai of University College London and lead author of the study.

* On Earth, acrylonitrile is used in the manufacture of plastics.


NASA Goddard | A Titan Discovery


Abstract of ALMA detection and astrobiological potential of vinyl cyanide on Titan

Recent simulations have indicated that vinyl cyanide is the best candidate molecule for the formation of cell membranes/vesicle structures in Titan’s hydrocarbon-rich lakes and seas. Although the existence of vinyl cyanide (C2H3CN) on Titan was previously inferred using Cassini mass spectrometry, a definitive detection has been lacking until now. We report the first spectroscopic detection of vinyl cyanide in Titan’s atmosphere, obtained using archival data from the Atacama Large Millimeter/submillimeter Array (ALMA), collected from February to May 2014. We detect the three strongest rotational lines of C2H3CN in the frequency range of 230 to 232 GHz, each with >4σ confidence. Radiative transfer modeling suggests that most of the C2H3CN emission originates at altitudes of ≳200 km, in agreement with recent photochemical models. The vertical column densities implied by our best-fitting models lie in the range of 3.7 × 1013 to 1.4 × 1014 cm−2. The corresponding production rate of vinyl cyanide and its saturation mole fraction imply the availability of sufficient dissolved material to form ~107 cell membranes/cm3 in Titan’s sea Ligeia Mare.

Disney Research’s ‘Magic Bench’ makes augmented reality a headset-free group experience

Magic Bench (credit: Disney Research)

Disney Research has created the first shared, combined augmented/mixed-reality experience, replacing first-person head-mounted displays or handheld devices with a mirrored image on a large screen — allowing people to share the magical experience as a group.

Sit on Disney Research’s Magic Bench and you may see an elephant hand you a glowing orb, hear its voice, and feel it sit down next to you, for example. Or you might get rained on and find yourself underwater.

How it works

Flowchart of the Magic Bench installation (credit: Disney Research)

People seated on the Magic Bench can see themselves on a large video display in front of them. The scene is reconstructed using a combined depth sensor/video camera (Microsoft Kinect) to image participants, bench, and surroundings. An image of the participants is projected on a large screen, allowing them to occupy the same 3D space as a computer-generated character or object. The system can also infer participants’ gaze.*

Speakers and haptic sensors built into the bench add to the experience (by vibrating the bench when the elephant sits down in this example).

The research team will present and demonstrate the Magic Bench at SIGGRAPH 2017, the Computer Graphics and Interactive Techniques Conference, which began Sunday, July 30 in Los Angeles.

* To eliminate depth shadows that occur in areas where the depth sensor has no corresponding line of sight with the color camera, a modified algorithm creates a 2D backdrop, according to the researchers. The 3D and 2D reconstructions are positioned in virtual space and populated with 3D characters and effects in such a way that the resulting real-time rendering is a seamless composite, fully capable of interacting with virtual physics, light, and shadows.
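
A rough sketch of that compositing step is below (illustrative only, not Disney’s implementation; the array shapes, the zero-depth convention for missing readings, and the simple depth test are assumptions):

```python
# Minimal sketch of the idea in the footnote above: fill "depth shadows" --
# pixels the depth sensor cannot see -- with a pre-captured 2D backdrop,
# then composite CG elements with a simple depth test. Not Disney's code.
import numpy as np

def composite_frame(color, depth, backdrop, cg_color, cg_depth):
    """color: HxWx3 camera image; depth: HxW depth map (0 = no reading);
    backdrop: HxWx3 image of the empty scene; cg_*: rendered CG layer."""
    frame = color.copy()

    # Where the depth sensor has no line of sight, fall back to the 2D backdrop.
    shadow = depth == 0
    frame[shadow] = backdrop[shadow]
    scene_depth = np.where(shadow, np.inf, depth)

    # Simple depth test: CG pixels win wherever they are closer to the camera.
    cg_in_front = cg_depth < scene_depth
    frame[cg_in_front] = cg_color[cg_in_front]
    return frame

# Toy usage with synthetic 480x640 frames
h, w = 480, 640
frame = composite_frame(
    np.zeros((h, w, 3)), np.random.rand(h, w),
    np.ones((h, w, 3)) * 0.5, np.ones((h, w, 3)), np.full((h, w), 0.3))
print(frame.shape)  # (480, 640, 3)
```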


DisneyResearchHub | Magic Bench


Abstract of Magic Bench

Mixed Reality (MR) and Augmented Reality (AR) create exciting opportunities to engage users in immersive experiences, resulting in natural human-computer interaction. Many MR interactions are generated around a first-person Point of View (POV). In these cases, the user interacts with the environment, which is digitally displayed either through a head-mounted display or a handheld computing device. One drawback of such conventional AR/MR platforms is that the experience is user-specific. Moreover, these platforms require the user to wear and/or hold an expensive device, which can be cumbersome and alter interaction techniques. We create a solution for multi-user interactions in AR/MR, where a group can share the same augmented environment with any computer generated (CG) asset and interact in a shared story sequence through a third-person POV. Our approach is to instrument the environment leaving the user unburdened of any equipment, creating a seamless walk-up-and-play experience. We demonstrate this technology in a series of vignettes featuring humanoid animals. Participants can not only see and hear these characters, they can also feel them on the bench through haptic feedback. Many of the characters also interact with users directly, either through speech or touch. In one vignette an elephant hands a participant a glowing orb. This demonstrates HCI in its simplest form: a person walks up to a computer, and the computer hands the person an object.

A living programmable biocomputing device based on RNA

“Ribocomputing devices” (yellow) developed by a team at the Wyss Institute can now be used by synthetic biologists to sense and interpret multiple signals in cells and logically instruct their ribosomes (blue and green) to produce different proteins. (credit: Wyss Institute at Harvard University)

Synthetic biologists at Harvard’s Wyss Institute for Biologically Inspired Engineering and associates have developed a living programmable “ribocomputing” device based on networks of precisely designed, self-assembling synthetic RNAs (ribonucleic acid). The RNAs can sense multiple biosignals and make logical decisions to control protein production with high precision.

As reported in Nature, the synthetic biological circuits could be used to produce drugs, fine chemicals, and biofuels or detect disease-causing agents and release therapeutic molecules inside the body. The low-cost diagnostic technologies may even lead to nanomachines capable of hunting down cancer cells or switching off aberrant genes.

Biological logic gates

Similar to a digital circuit, these synthetic biological circuits can process information and make logic-guided decisions, using basic logic operations — AND, OR, and NOT. But instead of detecting voltages, the decisions are based on specific chemicals or proteins, such as toxins in the environment, metabolite levels, or inflammatory signals. The specific ribocomputing parts can be readily designed on a computer.
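
As a software analogy, the decision logic itself is ordinary Boolean evaluation. Here is a minimal sketch that treats each sensed RNA input as true/false and evaluates the 12-input expression quoted in the paper’s abstract below (the Boolean abstraction is illustrative only and ignores the underlying molecular kinetics):

```python
# Minimal sketch of the logic abstraction described above: each trigger RNA is
# treated as a boolean input, and the circuit decides whether the output
# protein gets made. The 12-input expression is the one quoted in the paper's
# abstract; this boolean model is an illustrative simplification, not the
# molecular mechanism itself.

def ribocircuit_output(a1, a2, a1_anti, b1, b2, b2_anti,
                       c1, c2, d1, d2, e1, e2):
    """(A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*)
       OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2)"""
    return ((a1 and a2 and not a1_anti) or
            (b1 and b2 and not b2_anti) or
            (c1 and c2) or
            (d1 and d2) or
            (e1 and e2))

# Example: only the C pair of trigger RNAs is present -> reporter protein ON.
inputs = dict(a1=False, a2=False, a1_anti=False,
              b1=False, b2=False, b2_anti=False,
              c1=True, c2=True, d1=False, d2=False, e1=False, e2=False)
print("reporter expressed:", ribocircuit_output(**inputs))  # True
```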

E. coli bacteria engineered to be ribocomputing devices output a green-glowing protein when they detect a specific set of programmed RNA molecules as input signals (credit: Harvard University)

The research was performed with E. coli bacteria, which regulate the expression of a fluorescent (glowing) reporter protein when the bacteria encounter a specific complex set of intra-cellular stimuli. But the researchers believe ribocomputing devices can work with other host organisms or in extracellular settings.

Previous synthetic biological circuits have only been able to sense a handful of signals, giving them an incomplete picture of conditions in the host cell. They are also built out of different types of molecules, such as DNAs, RNAs, and proteins, that must find, bind, and work together to sense and process signals. Identifying molecules that cooperate well with one another is difficult and makes development of new biological circuits a time-consuming and often unpredictable process.

Brain-like neural networks next

Ribocomputing devices could also be freeze-dried on paper, leading to paper-based biological circuits, including diagnostics that can sense and integrate several disease-relevant signals in a clinical sample, the researchers say.

The next stage of research will focus on the use of RNA “toehold” technology* to produce neural networks within living cells — circuits capable of analyzing a range of excitatory and inhibitory inputs, averaging them, and producing an output once a particular threshold of activity is reached. (Similar to how a neuron averages incoming signals from other neurons.)

Ultimately, researchers hope to induce cells to communicate with one another via programmable molecular signals, forming a truly interactive, brain-like network, according to lead author Alex Green, an assistant professor at Arizona State University’s Biodesign Institute.

Wyss Institute Core Faculty member Peng Yin, Ph.D., who led the study, is also Professor of Systems Biology at Harvard Medical School.

The study was funded by the Wyss Institute’s Molecular Robotics Initiative, a Defense Advanced Research Projects Agency (DARPA) Living Foundries grant, and grants from the National Institutes of Health (NIH), the Office of Naval Research (ONR), the National Science Foundation (NSF), and the Defense Threat Reduction Agency (DTRA).

* The team’s approach evolved from its previous development of “toehold switches” in 2014 — programmable hairpin-like nano-structures made of RNA. In principle, RNA toehold switches can control the production of a specific protein: when a desired complementary “trigger” RNA, which can be part of the cell’s natural RNA repertoire, is present and binds to the toehold switch, the hairpin structure breaks open. Only then will the cell’s ribosomes get access to the RNA and produce the desired protein.
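
A toy way to picture the base-pairing rule is sketched below (purely illustrative, with made-up sequences; real toehold-switch design also accounts for RNA secondary structure and thermodynamics):

```python
# Illustrative sketch (not the Wyss design software) of the base-pairing idea
# behind a toehold switch: the switch opens only if the trigger RNA is the
# reverse complement of the switch's sensing region. Sequences are made up.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def switch_opens(sensing_region, trigger):
    """The hairpin opens (ribosome gains access) only when the trigger
    base-pairs perfectly with the switch's exposed sensing region."""
    return trigger == reverse_complement(sensing_region)

sensing_region = "AUGGCUAACGUU"               # hypothetical toehold + stem bases
trigger = reverse_complement(sensing_region)  # the matching trigger RNA
print(switch_opens(sensing_region, trigger))         # True  -> protein produced
print(switch_opens(sensing_region, "AUGGCUAACGAA"))  # False -> switch stays closed
```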


Wyss Institute | Mechanism of the Toehold Switch


Abstract of Complex cellular logic computation using ribocomputing devices

Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our ‘ribocomputing’ systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.

How to run faster, smarter AI apps on smartphones

(credit: iStock)

When you use smartphone AI apps like Siri, you’re dependent on the cloud for a lot of the processing — limited by your connection speed. But what if your smartphone could do more of the processing directly on your device — allowing for smarter, faster apps?

MIT scientists have taken a step in that direction with a new way to enable artificial-intelligence systems called convolutional neural networks (CNNs) to run locally on mobile devices. (CNNs are used in areas such as autonomous driving, speech recognition, computer vision, and automatic translation.) Neural networks* take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

The new MIT analytic method can determine how much power a neural network will actually consume when run on a particular type of hardware. The researchers used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The new CNN designs are also tuned to run on an energy-efficient computer chip optimized for neural networks that the researchers developed in 2016.

Reducing energy consumption

The new MIT software method uses “energy-aware pruning” — reducing a neural network’s power consumption by cutting out the parts of each layer that contribute very little to the network’s final output, starting with the layers that consume the most energy.
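
The general idea can be sketched in a few lines (a simplified illustration only, not MIT’s actual algorithm or energy model; the layer sizes and the energy-per-operation proxy below are made up):

```python
# Rough sketch of the "energy-aware pruning" idea described above (not MIT's
# method or numbers): estimate an energy cost per layer, then zero out the
# smallest-magnitude weights in the most energy-hungry layer first.
import numpy as np

def estimate_energy(weights, energy_per_mac=1.0):
    # Crude proxy: energy ~ number of nonzero multiply-accumulate operations.
    return energy_per_mac * np.count_nonzero(weights)

def energy_aware_prune(layers, fraction=0.3):
    """layers: list of weight matrices. Prune `fraction` of the weights in the
    layer currently estimated to consume the most energy."""
    costs = [estimate_energy(w) for w in layers]
    target = int(np.argmax(costs))            # most energy-hungry layer
    w = layers[target]
    k = int(fraction * w.size)
    threshold = np.sort(np.abs(w), axis=None)[k]
    w[np.abs(w) < threshold] = 0.0            # remove low-impact weights
    return layers

rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 128)), rng.normal(size=(256, 512))]
before = sum(estimate_energy(w) for w in layers)
layers = energy_aware_prune(layers)
after = sum(estimate_energy(w) for w in layers)
print(f"estimated energy reduced by {100 * (1 - after / before):.1f}%")
```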

Associate professor of electrical engineering and computer science Vivienne Sze and colleagues describe the work in an open-access paper they’re presenting this week (the week of July 24, 2017) at the Computer Vision and Pattern Recognition Conference. They report that the methods offered up to a 73 percent reduction in power consumption over the standard implementation of neural networks — 43 percent better than the best previous method.

Meanwhile, another MIT group at the Computer Science and Artificial Intelligence Laboratory has designed a hardware approach to reduce energy consumption and increase computer-chip processing speed for specific apps, using “cache hierarchies.” (“Caches” are small, local memory banks that store data that’s frequently used by computer chips to cut down on time- and energy-consuming communication with off-chip memory.)**

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent. They presented the new system, dubbed Jenga, in an open-access paper at the International Symposium on Computer Architecture earlier in July 2017.

Better batteries — or maybe, no battery?

Another solution to better mobile AI is improving rechargeable batteries in cell phones (and other mobile devices), which have limited charge capacity and short lifecycles, and perform poorly in cold weather.

Recently, DARPA-funded researchers from the University of Houston (and at the University of California-San Diego and Northwestern University) have discovered that quinones — an earth-abundant, easily recyclable material that is inexpensive and nonflammable — can address current battery limitations.

“One of these batteries, as a car battery, could last 10 years,” said Yan Yao, associate professor of electrical and computer engineering. In addition to slowing the deterioration of batteries for vehicles and for stationary electricity storage, the material would make battery disposal easier because it does not contain heavy metals.*** The research is described in Nature Materials.

The first battery-free cellphone that can send and receive calls using only a few microwatts of power. (credit: Mark Stone/University of Washington)

But what if we eliminated batteries altogether? University of Washington researchers have invented a cellphone that requires no batteries. Instead, it harvests 3.5 microwatts of power from ambient radio signals, light, or even the vibrations of a speaker.

The new technology is detailed in a paper published July 1, 2017 in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies.

The UW researchers demonstrated how to harvest this energy from ambient radio signals transmitted by a WiFi base station up to 31 feet away. “You could imagine in the future that all cell towers or Wi-Fi routers could come with our base station technology embedded in it,” said co-author Vamsi Talla, a former UW electrical engineering doctoral student and Allen School research associate. “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere.”

A cellphone CPU (central processing unit) typically requires several watts or more (depending on the app), so we’re not quite there yet. But that power requirement could one day be sufficiently reduced by future special-purpose chips and MIT’s optimized algorithms.
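
For a rough sense of that gap, here is a back-of-the-envelope comparison (the 2-watt CPU figure is an assumed round number for illustration; the 3.5 microwatts is the harvested power reported by the UW team above):

```python
# Back-of-the-envelope gap between harvested power and a typical phone CPU.
# The 3.5 microwatts comes from the UW work described above; the 2-watt CPU
# figure is an assumed round number for illustration only.
harvested_w = 3.5e-6   # watts harvested from ambient RF, light, or vibration
cpu_w = 2.0            # assumed typical smartphone CPU draw under load
print(f"shortfall: ~{cpu_w / harvested_w:,.0f}x")   # roughly 570,000x
```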

It might even let you do amazing things. :)

* Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet is reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.
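
A toy illustration of that weight-readjustment loop, on a single linear layer, follows (pedagogical only; it is not how AlexNet or GoogLeNet are actually trained):

```python
# Toy illustration of the training loop sketched in the footnote above: a
# one-layer "network" whose weights are nudged until its output matches the
# target. Purely pedagogical; real CNNs have millions of weights and layers.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))            # 8 training examples, 4 inputs each
true_w = rng.normal(size=(4, 1))
y = x @ true_w                         # targets the network should learn

w = np.zeros((4, 1))                   # the weights to be "readjusted"
for step in range(500):
    error = x @ w - y                  # compare output with the desired result
    grad = x.T @ error / len(x)        # how each weight contributed to the error
    w -= 0.1 * grad                    # readjust the weights a little

print("max weight error:", float(np.max(np.abs(w - true_w))))
```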

** The software reallocates cache access on the fly to reduce latency (delay), based on the physical locations of the separate memory banks that make up the shared memory cache. If multiple cores are retrieving data from the same DRAM [memory] cache, this can cause bottlenecks that introduce new latencies. So after Jenga has come up with a set of cache assignments, cores don’t simply dump all their data into the nearest available memory bank; instead, Jenga parcels out the data a little at a time, then estimates the effect on bandwidth consumption and latency. 

*** The stumbling block, Yao said, has been the anode, the portion of the battery through which energy flows. Existing anode materials are intrinsically structurally and chemically unstable, meaning the battery is only efficient for a relatively short time. The differing formulations offer evidence that the material is an effective anode for both acid batteries and alkaline batteries, such as those used in a car, as well as emerging aqueous metal-ion batteries.

Is anyone home? A way to find out if AI has become self-aware

(credit: Gerd Altmann/Pixabay)

By Susan Schneider, PhD, and Edwin Turner, PhD

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

A test for machine consciousness

So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.

(credit: Gerd Altmann/Pixabay)

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.

Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.

At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness” would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.

Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.

There are also nonverbal behaviors that could indicate Zeta consciousness such as mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.

The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.

Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.

We can get around this though. One proposed technique in AI safety involves “boxing in” an AI—making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.

(credit: Gerd Altmann/Pixabay)

Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.

ACTs also could be useful for “consciousness engineering” during the development of different kinds of AIs, helping to avoid using conscious machines in unethical ways or to create synthetic consciousness when appropriate.

Beyond the Turing Test

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)

But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.

This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.

So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?

The age of AI will be a time of soul-searching — both of ours, and for theirs.

Originally published in Scientific American, July 19, 2017

Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.

Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.

Supersapiens, the Rise of the Mind

(credit: Markus Mooslechner)

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist, evolutionary biologist, and author Richard Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive Producers are Joanne Reay and Walter Koehler. Distribution is to be announced.

Cast:

  • Mikey Siegel, Consciousness Hacker, San Francisco
  • Sam Harris, Neuroscientist, Philosopher
  • Ben Goertzel, Chief Scientist, Hanson Robotics, Hong Kong
  • Hugo de Garis, retired director of China Brain Project, Xiamen, China
  • Susan Schneider, Philosopher and Cognitive Scientist, University of Connecticut
  • Joel Murphy, owner, OpenBCI, Brooklyn, New York
  • Tim Mullen, Neuroscientist, CEO / Research Director, Qusp Labs
  • Conor Russomanno, CEO, OpenBCI, Brooklyn, New York
  • David Putrino, Neuroscientist, Weill-Cornell Medical College, New York
  • Hannes Sjoblad, Tech Activist, Bodyhacker, Stockholm, Sweden
  • Richard Dawkins, Evolutionary Biologist, Author, Oxford, UK
  • Nick Bostrom, Philosopher, Future of Humanity Institute, Oxford University, UK
  • Anders Sandberg, Computational Neuroscientist, Oxford University, UK
  • Adam Gazzaley, Neuroscientist, Executive Director UCSF Neuroscape, San Francisco, USA
  • Andy Walshe, Director Red Bull High Performance, Santa Monica, USA
  • Randal Koene, Science Director, Carboncopies, San Francisco


Markus Mooslechner | Supersapiens teaser

Alphabet’s X announces Glass Enterprise Edition, a hands-free device for hands-on workers

Glass Enterprise Edition (credit: X)

Alphabet’s X today announced Glass Enterprise Edition (EE) — an augmented-reality device targeted mainly at hands-on workers.

Glass EE is an improved version of the “Explorer Edition” — an experimental 2013 corporate version of the original Glass product.

On the left is an assembly engine manual that GE’s mechanics used to consult. Now they use Glass Enterprise Edition on the right. (credit: X)

In January 2015, the Enterprise team in X quietly began shipping the Enterprise Edition to corporate solution partners like GE and DHL.

Now, there are more than 50 businesses, including AGCO, Dignity Health, NSF International, Sutter Health, The Boeing Company, and Volkswagen, all of which have been using Glass to complete their work faster and more easily than before, the X blog reports.

Workers can access training videos, images annotated with instructions, or quality assurance checklists, for example, or invite others to “see what you see” through a live video stream to collaborate and troubleshoot in real time.

AGCO workers use Glass to see assembly instructions, make reports and get remote video support. (credit: X)

Glass EE enables workers to scan a machine’s serial number to instantly bring up a manual, photo, or video they may need to build a tractor. (credit: AGCO)

Significant improvements

The new “Glass 2.0” design makes significant improvements over the original Glass, according to Jay Kothari, project lead on the Glass enterprise team, as reported by Wired. It’s accessible for those who wear prescription lenses. A release switch allows for removing the “Glass Pod” electronics part from the frame for use with safety glasses for the factory floor. EE also has faster WiFi, faster processing, extended battery life, an 8-megapixel camera (up from 5), and a (much-requested) red light to indicate recording is in progress.

Using Glass with Augmedix, doctors and nurses at Dignity Health can focus on patient care rather than record keeping. (credit: X)

But uses are not limited to factories. EE exclusive distributor Glass Partners also offers Glass devices, specialized software solutions, and ongoing support for applications such as Augmedix, a documentation-automation platform powered by human experts and software that frees physicians from computer work (Glass has “brought the joys of medicine back to my doctors,” says Albert Chan, M.D., of Sutter Health), and swyMed, which gives medical care teams the ability to reliably connect to doctors for real-time telemedicine.

And there are even (carefully targeted) uses for non-workers: Aira provides blind and low-vision people with instant access to information.

A recent Forrester Research report predicts that by 2025, nearly 14.4 million U.S. workers will wear smart glasses.


sutterhealth | Smart Glass Transforms Doctor’s Office Visits, Improves Satisfaction


Neural stem cells steered by electric fields can repair brain damage

Electrical stimulation of the rat brain to move neural stem cells (credit: Jun-Feng Feng et al./ Stem Cell Reports)

Electric fields can be used to guide transplanted human neural stem cells — cells that can develop into various brain tissues — to repair brain damage in specific areas of the brain, scientists at the University of California, Davis have discovered.

It’s well known that electric fields can locally guide wound healing. Damaged tissues generate weak electric fields, and research by UC Davis Professor Min Zhao at the School of Medicine’s Institute for Regenerative Cures has previously shown how these electric fields can attract cells into wounds to heal them.

But the problem is that neural stem cells are naturally only found deep in the brain — in the hippocampus and the subventricular zone. To repair damage to the outer layers of the brain (the cortex), they would have to migrate a significant distance in the much larger human brain.

Migrating neural stem cells with electric fields. (Left) Transplanted human neural stem cells would normally be carried along by the rostral migration stream (RMS) (red) toward the olfactory bulb (OB) (dark green, migration direction indicated by white arrow). (Right) But electrically guiding migration of the transplanted human neural stem cells reverses the flow toward the subventricular zone (bright green, migration direction indicated by red arrow). (credit: Jun-Feng Feng et al., adapted by KurzweilAI/Stem Cell Reports)

Could electric fields be used to help the stem cells migrate that distance? To find out, the researchers placed human neural stem cells in the rostral migration stream (a pathway in the rat brain that carries cells toward the olfactory bulb, which governs the animal’s sense of smell). Cells move easily along this pathway because they are carried by the flow of cerebrospinal fluid, guided by chemical signals.

But by applying an electric field within the rat’s brain, the researchers found they could get the transplanted stem cells to reverse direction and swim “upstream” against the fluid flow. Once there, the transplanted stem cells stayed in their new locations for weeks or months after treatment, with indications of differentiation (forming into different types of neural cells).

“Electrical mobilization and guidance of stem cells in the brain provides a potential approach to facilitate stem cell therapies for brain diseases, stroke and injuries,” Zhao concluded.

But it will take future investigation to see if electrical stimulation can mobilize and guide migration of neural stem cells in diseased or injured human brains, the researchers note.

The research was published July 11, 2017 in the journal Stem Cell Reports.

Additional authors on the paper are at Ren Ji Hospital, Shanghai Jiao Tong University, and Shanghai Institute of Head Trauma in China and at Aaken Laboratories, Davis. The work was supported by the California Institute for Regenerative Medicine with additional support from NIH, NSF, and Research to Prevent Blindness Inc.


Abstract of Electrical Guidance of Human Stem Cells in the Rat Brain

Limited migration of neural stem cells in adult brain is a roadblock for the use of stem cell therapies to treat brain diseases and injuries. Here, we report a strategy that mobilizes and guides migration of stem cells in the brain in vivo. We developed a safe stimulation paradigm to deliver directional currents in the brain. Tracking cells expressing GFP demonstrated electrical mobilization and guidance of migration of human neural stem cells, even against co-existing intrinsic cues in the rostral migration stream. Transplanted cells were observed at 3 weeks and 4 months after stimulation in areas guided by the stimulation currents, and with indications of differentiation. Electrical stimulation thus may provide a potential approach to facilitate brain stem cell therapies.

Drinking coffee associated with lower risk of death from all causes, study finds

(credit: iStock)

People who drink around three cups of coffee a day may live longer than non-coffee drinkers, a landmark study has found.

The findings — published in the journal Annals of Internal Medicine — come from the largest study of its kind, in which scientists analyzed data from more than half a million people across 10 European countries to explore the effect of coffee consumption on risk of mortality.

Researchers from the International Agency for Research on Cancer (IARC) and Imperial College London found that higher levels of coffee consumption were associated with a reduced risk of death from all causes, particularly from circulatory diseases and diseases related to the digestive tract.

“We found that higher coffee consumption was associated with a lower risk of death from any cause, and specifically for circulatory diseases, and digestive diseases,” said lead author Marc Gunter of the IARC and formerly at Imperial’s School of Public Health. “Importantly, these results were similar across all of the 10 European countries, with variable coffee drinking habits and customs. Our study also offers important insights into the possible mechanisms for the beneficial health effects of coffee.”

Healthier livers, better glucose control

Using data from the EPIC study (European Prospective Investigation into Cancer and Nutrition), the group analyzed data from 521,330 people over the age of 35 from 10 EU countries, including the UK, France, Denmark and Italy. People’s diets were assessed using questionnaires and interviews, with the highest level of coffee consumption (by volume) reported in Denmark (900 mL per day) and the lowest in Italy (approximately 92 mL per day). Those who drank more coffee were also more likely to be younger, to smoke, to drink alcohol, and to eat more meat and fewer fruits and vegetables.

After 16 years of follow-up, almost 42,000 people in the study had died from a range of conditions including cancer, circulatory diseases, heart failure and stroke. Following careful statistical adjustments for lifestyle factors such as diet and smoking, the researchers found that the group with the highest consumption of coffee had a lower risk for all causes of death, compared to those who did not drink coffee.
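
For readers curious how adjusted comparisons like these are typically computed, here is a minimal sketch using the open-source lifelines package on made-up data (this is not the EPIC dataset or the authors’ analysis code; all variable names and values below are hypothetical):

```python
# Sketch of how adjusted hazard ratios like those in the abstract below are
# typically estimated (Cox proportional hazards). Uses the `lifelines` package
# on made-up data; it is NOT the EPIC analysis or its dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "coffee_cups":   rng.integers(0, 5, n),    # cups per day (hypothetical)
    "smoker":        rng.integers(0, 2, n),    # confounder to adjust for
    "age":           rng.uniform(35, 70, n),
    "follow_up_yrs": rng.uniform(1, 16, n),    # time each person was observed
    "died":          rng.integers(0, 2, n),    # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_yrs", event_col="died")
cph.print_summary()  # exp(coef) for coffee_cups = adjusted hazard ratio per extra daily cup
```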

They found that decaffeinated coffee had a similar effect.

In a subset of 14,000 people, they also analyzed metabolic biomarkers, and found that coffee drinkers may have healthier livers overall and better glucose control than non-coffee drinkers.

According to the group, more research is needed to find out which of the compounds in coffee may be giving a protective effect or potentially benefiting health.* Other avenues of research to explore could include intervention studies, looking at the effect of coffee drinking on health outcomes.

However, Gunter noted that “due to the limitations of observational research, we are not at the stage of recommending people to drink more or less coffee. That said, our results suggest that moderate coffee drinking is not detrimental to your health, and that incorporating coffee into your diet could have health benefits.”

The study was funded by the European Commission Directorate General for Health and Consumers and the IARC.

* Coffee contains a number of compounds that can interact with the body, including caffeine, diterpenes and antioxidants, and the ratios of these compounds can be affected by the variety of methods used to prepare coffee.


Abstract of Coffee Drinking and Mortality in 10 European Countries: A Multinational Cohort Study

Background: The relationship between coffee consumption and mortality in diverse European populations with variable coffee preparation methods is unclear.

Objective: To examine whether coffee consumption is associated with all-cause and cause-specific mortality.

Design: Prospective cohort study.

Setting: 10 European countries.

Participants: 521 330 persons enrolled in EPIC (European Prospective Investigation into Cancer and Nutrition).

Measurements: Hazard ratios (HRs) and 95% CIs estimated using multivariable Cox proportional hazards models. The association of coffee consumption with serum biomarkers of liver function, inflammation, and metabolic health was evaluated in the EPIC Biomarkers subcohort (n = 14 800).

Results: During a mean follow-up of 16.4 years, 41 693 deaths occurred. Compared with nonconsumers, participants in the highest quartile of coffee consumption had statistically significantly lower all-cause mortality (men: HR, 0.88 [95% CI, 0.82 to 0.95]; P for trend < 0.001; women: HR, 0.93 [CI, 0.87 to 0.98]; P for trend = 0.009). Inverse associations were also observed for digestive disease mortality for men (HR, 0.41 [CI, 0.32 to 0.54]; P for trend < 0.001) and women (HR, 0.60 [CI, 0.46 to 0.78]; P for trend < 0.001). Among women, there was a statistically significant inverse association of coffee drinking with circulatory disease mortality (HR, 0.78 [CI, 0.68 to 0.90]; P for trend < 0.001) and cerebrovascular disease mortality (HR, 0.70 [CI, 0.55 to 0.90]; P for trend = 0.002) and a positive association with ovarian cancer mortality (HR, 1.31 [CI, 1.07 to 1.61]; P for trend = 0.015). In the EPIC Biomarkers subcohort, higher coffee consumption was associated with lower serum alkaline phosphatase; alanine aminotransferase; aspartate aminotransferase; γ-glutamyltransferase; and, in women, C-reactive protein, lipoprotein(a), and glycated hemoglobin levels.

Limitations: Reverse causality may have biased the findings; however, results did not differ after exclusion of participants who died within 8 years of baseline. Coffee-drinking habits were assessed only once.

Conclusion: Coffee drinking was associated with reduced risk for death from various causes. This relationship did not vary by country.

Primary Funding Source: European Commission Directorate-General for Health and Consumers and International Agency for Research on Cancer.


Abstract of Association of Coffee Consumption With Total and Cause-Specific Mortality Among Nonwhite Populations

Background: Coffee consumption has been associated with reduced risk for death in prospective cohort studies; however, data in nonwhites are sparse.

Objective: To examine the association of coffee consumption with risk for total and cause-specific death.

Design: The MEC (Multiethnic Cohort), a prospective population-based cohort study established between 1993 and 1996.

Setting: Hawaii and Los Angeles, California.

Participants: 185 855 African Americans, Native Hawaiians, Japanese Americans, Latinos, and whites aged 45 to 75 years at recruitment.

Measurements: Outcomes were total and cause-specific mortality between 1993 and 2012. Coffee intake was assessed at baseline by means of a validated food-frequency questionnaire.

Results: 58 397 participants died during 3 195 484 person-years of follow-up (average follow-up, 16.2 years). Compared with drinking no coffee, coffee consumption was associated with lower total mortality after adjustment for smoking and other potential confounders (1 cup per day: hazard ratio [HR], 0.88 [95% CI, 0.85 to 0.91]; 2 to 3 cups per day: HR, 0.82 [CI, 0.79 to 0.86]; ≥4 cups per day: HR, 0.82 [CI, 0.78 to 0.87]; P for trend < 0.001). Trends were similar between caffeinated and decaffeinated coffee. Significant inverse associations were observed in 4 ethnic groups; the association in Native Hawaiians did not reach statistical significance. Inverse associations were also seen in never-smokers, younger participants (<55 years), and those who had not previously reported a chronic disease. Among examined end points, inverse associations were observed for deaths due to heart disease, cancer, respiratory disease, stroke, diabetes, and kidney disease.

Limitation: Unmeasured confounding and measurement error, although sensitivity analysis suggested that neither was likely to affect results.

Conclusion: Higher consumption of coffee was associated with lower risk for death in African Americans, Japanese Americans, Latinos, and whites.

Primary Funding Source: National Cancer Institute.

Projecting a visual image directly into the brain, bypassing the eyes

Brain-wide activity in a zebrafish when it sees and tries to pursue prey (credit: Ehud Isacoff lab/UC Berkeley)

Imagine replacing a damaged eye with a window directly into the brain — one that communicates with the visual part of the cerebral cortex by reading from a million individual neurons and simultaneously stimulating 1,000 of them with single-cell accuracy, allowing someone to see again.

That’s the goal of a $21.6 million DARPA award to the University of California, Berkeley (UC Berkeley), one of six organizations funded by DARPA’s Neural Engineering System Design program announced this week to develop implantable, biocompatible neural interfaces that can compensate for visual or hearing deficits.*

The UCB researchers ultimately hope to build a device for use in humans. But the researchers’ goal during the four-year funding period is more modest: to create a prototype to read and write to the brains of model organisms — allowing for neural activity and behavior to be monitored and controlled simultaneously. These organisms include zebrafish larvae, which are transparent, and mice, via a transparent window in the skull.


UC Berkeley | Brain activity as a zebrafish stalks its prey

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said project leader Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

How to read/write the brain

To communicate with the brain, the team will first insert a gene into neurons that makes fluorescent proteins, which flash when a cell fires an action potential. This will be accompanied by a second gene that makes a light-activated “optogenetic” protein, which stimulates neurons in response to a pulse of light.

Peering into a mouse brain with a light field microscope to capture live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) (credit: The Rockefeller University)

To read, the team is developing a miniaturized “light field microscope.”** Mounted on a small window in the skull, it peers through the surface of the brain to visualize up to a million neurons at a time at different depths and monitor their activity.***

This microscope is based on the revolutionary “light field camera,” which captures light through an array of lenses and reconstructs images computationally in any focus.
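
The refocusing idea can be sketched as a simple “shift-and-add” over sub-aperture views (an illustrative toy, not the Berkeley/Paris pipeline; the lens-array layout and the data below are synthetic):

```python
# Minimal "shift-and-add" sketch of computational refocusing with a light
# field: each lenslet's sub-aperture image is shifted in proportion to its
# offset from the array center, and the results are averaged. Illustrative
# only -- real light-field pipelines are considerably more involved.
import numpy as np

def refocus(subviews, alpha):
    """subviews: dict mapping lens offset (du, dv) -> 2D image.
    alpha: refocusing parameter (how far to shift per unit of lens offset)."""
    acc = None
    for (du, dv), img in subviews.items():
        shifted = np.roll(img, (int(round(alpha * du)), int(round(alpha * dv))),
                          axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(subviews)

# Toy 3x3 lens array of 64x64 views; refocus at two different synthetic depths.
rng = np.random.default_rng(0)
subviews = {(du, dv): rng.random((64, 64))
            for du in (-1, 0, 1) for dv in (-1, 0, 1)}
near = refocus(subviews, alpha=2.0)
far = refocus(subviews, alpha=-1.0)
print(near.shape, far.shape)  # (64, 64) (64, 64)
```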

A holographic projection created by a spatial light modulator would illuminate (“write”) one set of neurons at one depth — those patterned by the letter a, for example — and simultaneously illuminate other sets of neurons at other depths (z level) or in regions of the visual cortex, such as neurons with b or c patterns. That creates three-dimensional holograms that can light up hundreds of thousands of neurons at multiple depths, just under the cortical surface. (credit: Valentina Emiliani/University of Paris, Descartes)

The combined read-write function will eventually be used to directly encode perceptions into the human cortex — inputting a visual scene to enable a blind person to see. The goal is to eventually enable physicians to monitor and activate thousands to millions of individual human neurons using light.

Isacoff, who specializes in using optogenetics to study the brain’s architecture, can already successfully read from thousands of neurons in the brain of a larval zebrafish, using a large microscope that peers through the transparent skin of an immobilized fish, and simultaneously write to a similar number.

The team will also develop computational methods that identify the brain activity patterns associated with different sensory experiences, hoping to learn the rules well enough to generate “synthetic percepts” — such as the sensation of things being touched, for a person with a missing hand, for example.

The brain team includes ten UC Berkeley faculty and researchers from Lawrence Berkeley National Laboratory, Argonne National Laboratory, and the University of Paris, Descartes.

* In future articles, KurzweilAI will cover the other research projects announced by DARPA’s Neural Engineering System Design program, which is part of the U.S. NIH Brain Initiative.

** Light penetrates only the first few hundred microns of the surface of the brain’s cortex, which is the outer wrapping of the brain responsible for high-order mental functions, such as thinking and memory but also interpreting input from our senses. This thin outer layer nevertheless contains cell layers that represent visual and touch sensations.


Jack Gallant | Movie reconstruction from human brain activity

Team member Jack Gallant, a UC Berkeley professor of psychology, has shown that it’s possible to interpret what someone is seeing solely from measured neural activity in the visual cortex.

*** Developed by another collaborator, Valentina Emiliani at the University of Paris, Descartes, the light-field microscope and spatial light modulator will be shrunk to fit inside a cube one centimeter (two-fifths of an inch) on a side, so that it can be carried comfortably on the skull. During the next four years, team members will miniaturize the microscope, taking advantage of compressed light field microscopy developed by Ren Ng to take images with a flat sheet of lenses that allows focusing at all depths through a material. Several years ago, Ng, now a UC Berkeley assistant professor of electrical engineering and computer sciences, invented the light field camera.