Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system based on a wireless implanted brain–machine interface — and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest venture, Neuralink, now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the field to eight experts. They include Paul Merolla, who spent the last seven years as lead chip designer at IBM on its DARPA-funded SyNAPSE program, designing neuromorphic (brain-inspired) chips with 5.4 billion transistors, 1 million neurons, and 256 million synapses each; and Dongjin (DJ) Seo, who while at UC Berkeley designed an ultrasonic backscatter system, called neural dust, for powering and communicating with implanted bioelectronics to record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” that augments the brain’s existing cortex and limbic layers: a radically high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles that communicate wirelessly via the cloud and the internet, achieving super-fast communication speeds and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google DeepMind’s AlphaGo) and is often inexplicable. So how do we know a superintelligence would have the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, a professor at the UCSF School of Medicine and an expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab implants BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

Carnegie Mellon University AI beats top Chinese poker players

Carnegie Mellon University professor Tuomas Sandholm talks to Kai-Fu Lee, head of Sinovation Ventures, a Chinese venture capital firm, as Lee plays poker against Lengpudashi AI (credit: Sinovation Ventures)

Artificial intelligence (AI) triumphed over human poker players again (see “Carnegie Mellon AI beats top poker pros — a first”), as a computer program developed by Carnegie Mellon University (CMU) researchers beat six Chinese players by a total of $792,327 in virtual chips during a five-day, 36,000-hand exhibition that ended today (April 10, 2017) in Hainan, China.

The AI software program, called Lengpudashi (“cold poker master”), is a version of Libratus, the CMU AI that beat four top poker professionals during a 20-day, 120,000-hand Heads-Up No-Limit Texas Hold’em competition in January in Pittsburgh, Pennsylvania.

Strategic Machine Inc.*, a company founded by Tuomas Sandholm, professor of computer science and co-creator of Libratus/Lengpudashi with Noam Brown, a Ph.D. student in computer science, will take home a pot worth approximately $290,000.

Results of the exhibition pitting Lengpudashi AI against the six human players of Team Dragons (credit: Sinovation Ventures)

The human players, called Team Dragons, were led by Alan Du, a Shanghai venture capitalist who won a 2016 World Series of Poker bracelet.

The exhibition was organized by Kai-Fu Lee, a CMU alumnus and former faculty member who is CEO of Sinovation Ventures, an early-stage venture capital firm that invests in startups in China and the United States. He is a former executive of Apple, Microsoft and Google, and is one of the most prominent figures in China’s internet sector.

* Strategic Machine has exclusively licensed Libratus and other technologies from Sandholm’s CMU laboratory. Strategic Machine targets a broad set of applications: poker and other recreational games, business strategy, negotiation, cybersecurity, physical security, military applications, strategic pricing, finance, auctions, political campaigns, and medical treatment planning.

AlphaGo to take on world’s number one Go player in China

The world’s number one Go player, Ke Jie (far right) and associates have recreated the opening moves of one of AlphaGo’s games with Lee Sedol from memory to explain the beauty of its moves to Google CEO Sundar Pichai (second from left) during a visit Pichai made to Nie Weiping’s Go school in Beijing last year (credit: DeepMind)

DeepMind’s AlphaGo AI software will take on China’s top Go players in “The Future of Go Summit” — a five-day festival of Go and artificial intelligence in the game’s birthplace, China, on May 23–27, DeepMind Co-Founder & CEO Demis Hassabis announced today (April 10, 2017).

The summit will feature a variety of game formats involving AlphaGo and top Chinese players, specifically designed to explore the mysteries of the game together, but “the centerpiece of the event will be a classic 1:1 match of three games between AlphaGo and the world’s number one player, Ke Jie, to push AlphaGo to its limits,” Hassabis said.

The festival will also include a forum on the “Future of A.I.” in which leading experts from Google and China will explore “how AlphaGo has created new knowledge about the oldest of games, and how the technologies behind AlphaGo, machine learning, and artificial intelligence are bringing solutions to some of the world’s greatest challenges into reach.”

In March 2016, the AlphaGo deep-learning computer system became the first computer program to defeat a top Go champion, Korea’s Lee Sedol, shocking many observers of the game and marking a major breakthrough for AI.

DeepMind was founded in London in 2010 and backed by successful tech entrepreneurs. Having been acquired by Google in 2014, it is now part of the Alphabet group.

Do-it-yourself robotics kit gives science, tech, engineering, math students tools to automate biology and chemistry experiments

Bioengineers combined a Lego Mindstorms system (left) with a motorized pipette (center) for dropping fluids, allowing for simple experiments like showing how liquids of different salt densities can be layered. (credit: Riedel-Kruse Lab)

Stanford bioengineers have developed liquid-handling robots to allow students to modify and create their own robotic systems that can transfer precise amounts of fluids between flasks, test tubes, and experimental dishes.

The bioengineers combined a Lego Mindstorms robotics kit with a cheap and easy-to-find plastic syringe to create robots that approach the performance of the far more costly automation systems found at universities and biotech labs.

Step-by-step DIY plans

Children 10–13 years old built and explored the functionality of these robots by performing experiments (credit: Lukas C. Gerber et al./PloS Biology)

The idea is to enable students to learn the basics of robotics and the wet sciences in an integrated way. Students learn STEM skills like mechanical engineering, computer programming, and collaboration while gaining a deeper appreciation of the value of robots in life-sciences experiments.

“We really want kids to learn by doing,” said Ingmar Riedel-Kruse, assistant professor of bioengineering and a member of Stanford Bio-X, who led the team. “We show that with a few relatively inexpensive parts, a little training and some imagination, students can create their own liquid-handling robots and then run experiments on them — so they learn about engineering, coding, and the wet sciences at the same time.”

In an open-access paper in the journal PLoS Biology and on Riedel-Kruse’s lab website, the team offers step-by-step building plans and several fundamental experiments targeted to elementary, middle and high school students. They also offer experiments that students can conduct using common household consumables like food coloring, yeast or sugar.

In one experiment, colored liquids with distinct salt concentrations are layered atop one another to teach about liquid density. Other tests measure whether liquids are acids like vinegar or bases like baking soda, or which sugar concentration is best for yeast.
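For readers who want a sense of what “programming pipetting routines” looks like in practice, here is a minimal sketch of an aspirate/dispense routine written in Python against the ev3dev2 bindings for Lego Mindstorms EV3. The motor port, the microliters-per-rotation calibration constant, and the helper functions are illustrative assumptions, not part of the published Stanford protocol.

```python
# Minimal sketch of a Lego pipetting routine (assumes the syringe plunger
# is driven by a large motor on port A; calibration value is a placeholder).
from ev3dev2.motor import LargeMotor, OUTPUT_A, SpeedPercent

UL_PER_ROTATION = 50.0          # assumed calibration: microliters per motor rotation
plunger = LargeMotor(OUTPUT_A)  # hypothetical port assignment

def aspirate(volume_ul, speed=20):
    """Draw liquid into the syringe by retracting the plunger."""
    plunger.on_for_rotations(SpeedPercent(speed), -volume_ul / UL_PER_ROTATION)

def dispense(volume_ul, speed=20):
    """Push liquid out of the syringe by advancing the plunger."""
    plunger.on_for_rotations(SpeedPercent(speed), volume_ul / UL_PER_ROTATION)

# Example routine: layer three salt solutions of decreasing density.
for volume_ul in (200, 200, 200):
    aspirate(volume_ul)
    # ...move the gantry over the target cuvette here...
    dispense(volume_ul, speed=5)  # dispense slowly to avoid mixing the layers
```

Adjusting volumes and dispense speeds in a routine like this is where the coding and wet-science lessons meet: students can see directly how a slower dispense preserves the density layers.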

Funding was provided by grants from the National Science Foundation (Cyberlearning and National Robotics Initiative).


Stanford University School of Engineering | SFENG Robots Riedel Kruse v4


Abstract of Liquid-handling Lego robots and experiments for STEM education and research

Liquid-handling robots have many applications for biotechnology and the life sciences, with increasing impact on everyday life. While playful robotics such as Lego Mindstorms significantly support education initiatives in mechatronics and programming, equivalent connections to the life sciences do not currently exist. To close this gap, we developed Lego-based pipetting robots that reliably handle liquid volumes from 1 ml down to the sub-μl range and that operate on standard laboratory plasticware, such as cuvettes and multiwell plates. These robots can support a range of science and chemistry experiments for education and even research. Using standard, low-cost household consumables, programming pipetting routines, and modifying robot designs, we enabled a rich activity space. We successfully tested these activities in afterschool settings with elementary, middle, and high school students. The simplest robot can be directly built from the widely used Lego Education EV3 core set alone, and this publication includes building and experiment instructions to set the stage for dissemination and further development in education and research.

How to control robots with your mind

The robot is informed that its initial motion was incorrect based upon real-time decoding of the observer’s EEG signals, and it corrects its selection accordingly to properly sort an object (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

Two research teams are developing new ways to communicate with robots and shape them one day into the kind of productive workers featured in the AMC TV show HUMANS (now in its second season).

Programming robots to function in a real-world environment is normally a complex process. But now a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a system that lets people correct robot mistakes instantly by simply thinking.

In the initial experiment, the system uses data from an electroencephalography (EEG) helmet to correct robot performance on an object-sorting task. Novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.
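The paper does not include code, but the core idea, classifying a short EEG window as “error” or “no error” quickly enough to interrupt the robot, can be sketched with standard tools. The window size, channel count, features, and classifier below are illustrative assumptions, not the authors’ pipeline.

```python
# Toy sketch of binary error-related-potential (ErrP) classification.
# Data here is random; a real system would use labeled EEG epochs and
# proper filtering/spatial projection before classification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake training set: 500 epochs x 48 channels x 60 samples (~250 ms of EEG)
X_epochs = rng.normal(size=(500, 48, 60))
y = rng.integers(0, 2, size=500)           # 1 = observer noticed a robot error

X = X_epochs.reshape(len(X_epochs), -1)    # flatten each epoch into a feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Online use: a single predict call on one incoming epoch is what has to
# fit inside the 10-to-30-millisecond budget mentioned above.
new_epoch = rng.normal(size=(48, 60)).reshape(1, -1)
if clf.predict(new_epoch)[0] == 1:
    print("error detected: stop the robot and correct its choice")
else:
    print("no error detected: let the robot continue")
```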

The system includes a main experiment controller, a Baxter robot, and an EEG acquisition and classification system. The goal is to make the robot pick up the cup that the experimenter is thinking about. An Arduino computer (bottom) relays messages between the EEG system and robot controller. A mechanical contact switch (yellow) detects robot arm motion initiation. (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

While the system currently handles relatively simple binary-choice activities, we may be able one day to control robots in much more intuitive ways. “Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word,” says CSAIL Director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”

The team used a humanoid robot named “Baxter” from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.


MITCSAIL | Brain-controlled Robots

Intuitive human-robot interaction

The system detects brain signals called “error-related potentials” (generated whenever our brains notice a mistake) to determine if the human agrees with a robot’s decision.

“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.” Or if the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer.

The team believes that future systems could extend to more complex multiple-choice tasks. The system could even be useful for people who can’t communicate verbally: the robot could be controlled via a series of several discrete binary choices, similar to how paralyzed locked-in patients spell out words with their minds.

The project was funded in part by Boeing and the National Science Foundation. An open-access paper will be presented at the IEEE International Conference on Robotics and Automation (ICRA) conference in Singapore this May.

Here, robot, Fetch!

Robot asks questions, and based on a person’s language and gesture, infers what item to deliver. (credit: David Whitney/Brown University)

But what if the robot is still confused? Researchers in Brown University’s Humans to Robots Lab have an app for that.

“Fetching objects is an important task that we want collaborative robots to be able to do,” said computer science professor Stefanie Tellex. “But it’s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it’s not sure.”

Tellex’s lab previously developed an algorithm that enables robots to receive speech commands as well as information from human gestures. But it ran into problems when there were lots of very similar objects in close proximity to each other. For example, on the table above, simply asking for “a marker” isn’t specific enough, and it might not be clear which one a person is pointing to if a number of markers are clustered close together.

“What we want in these situations is for the robot to be able to signal that it’s confused and ask a question rather than just fetching the wrong object,” Tellex explained.

The new algorithm does just that, enabling the robot to quantify how certain it is that it knows what a user wants. When its certainty is high, the robot will simply hand over the object as requested. When it’s not so certain, the robot makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, “this one?”


David Whitney | Reducing Errors in Object-Fetching Interactions through Social Feedback

One of the important features of the system is that the robot doesn’t ask questions with every interaction; it asks intelligently.

And even though the system asks only a very simple question, it’s able to make important inferences based on the answer. For example, say a user asks for a marker and there are two markers on a table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other marker must be the one that the user wants, and will hand that one over without asking another question. Those kinds of inferences, known as “implicatures,” make the algorithm more efficient.
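The published system is a POMDP solved in real time, but the two behaviors described here, asking only when uncertain and exploiting the implicature of a “no,” can be sketched as a simple belief update over candidate objects. The confidence threshold and the update rule below are illustrative assumptions, not the Brown model.

```python
# Toy sketch of "ask only when unsure" plus implicature reasoning.
CONFIDENCE_THRESHOLD = 0.75   # assumed: hand the object over without asking above this

def normalize(belief):
    total = sum(belief.values())
    return {obj: p / total for obj, p in belief.items()}

def choose_action(belief):
    best = max(belief, key=belief.get)
    if belief[best] >= CONFIDENCE_THRESHOLD:
        return ("hand_over", best)
    return ("ask", best)       # hover the gripper over the best guess and ask "This one?"

def update_after_answer(belief, guessed, answer_yes):
    if answer_yes:
        return {guessed: 1.0}
    # Implicature: a "no" to the guess shifts belief onto the remaining candidates.
    remaining = {obj: p for obj, p in belief.items() if obj != guessed}
    return normalize(remaining)

# Speech + gesture left two similar markers nearly tied.
belief = {"red_marker": 0.55, "blue_marker": 0.40, "mug": 0.05}

action, target = choose_action(belief)                          # -> ("ask", "red_marker")
belief = update_after_answer(belief, target, answer_yes=False)
action, target = choose_action(belief)                          # -> ("hand_over", "blue_marker")
print(action, target)
```

After the single “no,” the belief on the other marker rises to about 0.89, clearing the threshold, so the robot hands it over without asking a second question, mirroring the inference described above.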

In future work, Tellex and her team would like to combine the algorithm with more robust speech recognition systems, which might further increase the system’s accuracy and speed. “Currently we do not consider the parse of the human’s speech. We would like the model to understand prepositional phrases (‘on the left,’ ‘nearest to me’). This would allow the robot to understand how items are spatially related to other items through language.”

Ultimately, Tellex hopes, systems like this will help robots become useful collaborators both at home and at work.

An open-access paper on the DARPA-funded research will also be presented at the International Conference on Robotics and Automation.


Abstract of Correcting Robot Mistakes in Real Time Using EEG Signals

Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.


Abstract of Reducing Errors in Object-Fetching Interactions through Social Feedback

Fetching items is an important problem for a social robot. It requires a robot to interpret a person’s language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person’s requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model’s improvements, we conducted a real world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds faster (25% faster) than a state-of-the-art baseline, while being 2.1% more accurate.

Programmable shape-shifting molecular robots respond to DNA signals

Japanese researchers have developed an amoeba-like shape-changing molecular robot — assembled from biomolecules such as DNA, proteins, and lipids — that could act as a programmable and controllable robot for treating live culturing cells or monitoring environmental pollution, for example.

This is the first time a molecular robotic system has been able to recognize signals and control its shape-changing function; such molecular robots could in the near future function in ways similar to living organisms, according to the researchers.

Developed by a research group at Tohoku University and Japan Advanced Institute of Science and Technology, the molecular robot integrates molecular machines within an artificial cell membrane and is about one micrometer in diameter — similar in size to human cells. It can start and stop its shape-changing function in response to a specific DNA signal.

Schematic diagram of the molecular robot. (A) In response to a start-stop DNA signal, molecular actuators (microtubules) inside the robot change the shape of the artificial cell membrane (liposome), controlled by a “molecular clutch” that transmits the force from the actuator (kinesin proteins, shown in green, are linked by DNA to the cell membrane when activated). (B) Microscopy images of molecular robots. When the input DNA signal is “stop,” the clutch is turned “OFF,” deactivating the shape-changing behavior. Shape-changing is activated when the clutch is turned “ON.” Scale bar: 20 μm. The white arrow indicates the molecular actuator part that transforms the shape of the membrane. (credit: Yusuke Sato)

The movement force is generated by molecular actuators (microtubules) controlled by a molecular clutch (composed of DNA and kinesin — a “walker” that carries molecules along microtubules in the body). The shape of the robot’s body (artificial cell membrane, or liposome — a vesicle made from a lipid bilayer) is changed (from static to active) by the actuator, triggered by specific DNA signals activated by UV irradiation.

Kinesin motor protein “walking” along microtubule filament (credit: Jzp706/CC)

The realization of a molecular robot whose components are designed at a molecular level and that can function in a small and complicated environment, such as the human body, is expected to significantly expand the possibilities of robotics engineering, according to the researchers.*

“With more than 20 chemicals at varying concentrations, it took us a year and a half to establish good conditions for working our molecular robots,” says Associate Professor Shin-ichiro Nomura at Tohoku University’s Graduate School of Engineering, who led the study. “It was exciting to see the robot shape-changing motion through the microscope. It meant our designed DNA clutch worked perfectly, despite the complex conditions inside the robot.”

Programmable by DNA computing devices

The research results were published in an open-access paper in Science Robotics on March 1, 2017.

The authors say that “combining other molecular devices would lead to the realization of a molecular robot with advanced functions. For example, artificial nanopores, such as an artificial channel composed of DNA, could be used to sense signal molecules in the surrounding environments through the channel.

“In addition, the behavior of a molecular robot could be programmed by DNA computing devices, such as judging the condition of environments. These implementations could allow for the development of molecular robots capable of chemotaxis [movement in a direction corresponding to a gradient of increasing or decreasing concentration of a particular substance], [similar to] white blood cells, and beyond.”

The research was supported by the JSPS KAKENHI, AMED-CREST and Tohoku University-DIARE.

* In the current design, “there are still limitations in the functions of the robot. For example, the switching of robot behavior is not reversible. The shape change is not directional and as yet not possible for complex tasks, for example, locomotion. However, to the best of our knowledge, this is the first implementation of a molecular robot that can control its shape-changing behavior in response to specific signal molecules.” — Yusuke Sato et al./Science Robotics


Abstract of Micrometer-sized molecular robot changes its shape in response to signal molecules

Rapid progress in nanoscale bioengineering has allowed for the design of biomolecular devices that act as sensors, actuators, and even logic circuits. Realization of micrometer-sized robots assembled from these components is one of the ultimate goals of bioinspired robotics. We constructed an amoeba-like molecular robot that can express continuous shape change in response to specific signal molecules. The robot is composed of a body, an actuator, and an actuator-controlling device (clutch). The body is a vesicle made from a lipid bilayer, and the actuator consists of proteins, kinesin, and microtubules. We made the clutch using designed DNA molecules. It transmits the force generated by the motor to the membrane, in response to a signal molecule composed of another sequence-designed DNA with chemical modifications. When the clutch was engaged, the robot exhibited continuous shape change. After the robot was illuminated with light to trigger the release of the signal molecule, the clutch was disengaged, and consequently, the shape-changing behavior was successfully terminated. In addition, the reverse process—that is, initiation of shape change by input of a signal—was also demonstrated. These results show that the components of the robot were consistently integrated into a functional system. We expect that this study can provide a platform to build increasingly complex and functional molecular systems with controllable motility.

Neural networks promise sharpest-ever telescope images

From left to right: an example of an original galaxy image; the same image deliberately degraded; the image after recovery by the neural network; and, for comparison, the image recovered by deconvolution. The figure visually illustrates the neural network’s ability to recover features that conventional deconvolution cannot. (credit: K. Schawinski / C. Zhang / ETH Zurich)

Swiss researchers are using neural networks to achieve the sharpest-ever images in optical astronomy. The work appears in an open-access paper in Monthly Notices of the Royal Astronomical Society.

The aperture (diameter) of any telescope is fundamentally limited by its lens or mirror. The bigger the mirror or lens, the more light it gathers, allowing astronomers to detect fainter objects, and to observe them more clearly. Other factors affecting image quality are noise and atmospheric distortion.

The Swiss study uses “generative adversarial network” (GAN) machine-learning technology (see this KurzweilAI article) to go beyond this limit by using two neural networks that compete with each other to create a series of more realistic images. The researchers first train the neural network to “see” what galaxies look like (using blurred and sharp images of the same galaxy), and then ask it to automatically fix the blurred images of a galaxy, converting them to sharp ones.

Schematic illustration of the neural-network training process. The input is a set of original images. From these, the researchers automatically generate degraded images, and train a GAN. In the testing phase, only the generator will be used to recover images. (credit: K. Schawinski / C. Zhang / ETH Zurich)
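For readers curious what “two neural networks that compete with each other” looks like in code, here is a compact PyTorch sketch of the training loop: the generator maps degraded images to restored ones, and the discriminator tries to tell restored images from the originals. The tiny architectures, the degradation step, the loss weights, and the image size are illustrative assumptions, not the paper’s configuration.

```python
# Minimal sketch of a GAN trained to restore artificially degraded images.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Generator: degraded image -> restored image (toy conv net).
G = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
).to(device)

# Discriminator: image -> logit for "this is an original, undegraded image".
D = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1),
).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def degrade(imgs):
    """Stand-in for the paper's degradation step (here: just added noise)."""
    return imgs + 0.3 * torch.randn_like(imgs)

for step in range(1000):
    originals = torch.rand(16, 1, 64, 64, device=device)   # placeholder galaxy cutouts
    restored = G(degrade(originals))

    # Discriminator step: originals are "real" (1), restored images are "fake" (0).
    d_loss = bce(D(originals), torch.ones(16, 1, device=device)) + \
             bce(D(restored.detach()), torch.zeros(16, 1, device=device))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator while staying close to the originals.
    g_loss = bce(D(restored), torch.ones(16, 1, device=device)) + 10.0 * l1(restored, originals)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# In the testing phase only the generator is used: restored = G(blurred_observation)
```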

The trained neural networks were able to recognize and reconstruct features that the telescope could not resolve, such as star-forming regions and dust lanes in galaxies. The scientists checked the reconstructed images against the original high-resolution images to test the network’s performance, finding it better able to recover features than any method used to date.

“We can start by going back to sky surveys made with telescopes over many years, see more detail than ever before, and, for example, learn more about the structure of galaxies,” said lead author Prof. Kevin Schawinski of ETH Zurich in Switzerland. “There is no reason why we can’t then apply this technique to the deepest images from Hubble, and the coming James Webb Space Telescope, to learn more about the earliest structures in the Universe.”

ETH Zurich is hosting this work on the space.ml cross-disciplinary astrophysics/computer-science initiative, where the code is available to the general public.


Abstract of Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

How to build your own bio-bot

Bio-bot design inspired by the muscle-tendon-bone complex found in the human body, with 3D-printed flexible skeleton. Optical stimulation of the muscle tissue (orange), which is genetically engineered to contract in response to blue light, makes the bio-bot walk across a surface in the direction of the light. (credit: Ritu Raman et al./Nature Protocols)

For the past several years, researchers at the University of Illinois at Urbana-Champaign have reverse-engineered native biological tissues and organs — creating tiny walking “bio-bots” powered by muscle cells and controlled with electrical and optical pulses.

Now, in an open-access cover paper in Nature Protocols, the researchers are sharing a protocol with engineering details for their current generation of millimeter-scale soft robotic bio-bots*.

These devices couple 3D-printed flexible skeletons to tissue-engineered skeletal muscle actuators that drive locomotion across 2D surfaces; they could one day be used for studies of muscle development and disease, high-throughput drug testing, and dynamic implants, among other applications.

In a new design, the researchers worked with MIT optogenetics experts to genetically engineer a light-responsive skeletal muscle cell line that could be stimulated to contract by pulses of blue light. (credit: Ritu Raman et al./Nature Protocols)

The future of bio-bots

The researchers envision future generations of bio-bots as biological building blocks that lead to the machines of the future. The bio-bots would integrate multiple cell and tissue types, including neuronal networks for sensing and processing, and vascular networks for delivery of nutrients and other biochemical factors. They might also have some of the higher-order properties of biological materials, such as self-organization and self-healing.

“These next iterations of biohybrid machines could, for example, be designed to sense chemical toxins, locomote toward them, and neutralize them through cell-secreted factors. Such a functionality could have broad relevance in medical diagnostics and targeted therapeutics in vivo, or even be extended to environmental use as a method of cleaning pathogens from public water supplies,” the researchers note in the paper.

“This protocol is essentially intended to be a one-stop reference for any scientist around the world who wants to replicate the results we showed in our PNAS 2016 and PNAS 2014 papers, and give them a framework for building their own bio-bots for a variety of applications,” said Bioengineering Professor Rashid Bashir**, who heads the bio-bots research group.

Bashir’s group has been a pioneer in designing and building bio-bots, less than a centimeter in size, made of flexible 3D printed hydrogels and living cells. In 2012, the group demonstrated bio-bots that could “walk” on their own, powered by beating heart cells from rats. In 2014, they switched to muscle cells controlled with electrical pulses, giving researchers unprecedented command over their function.

* Not to be confused with swimming biobots and rescue biobots using remotely controlled cockroaches.

** Bashir is also Grainger Distinguished Chair in Engineering and head of the Department of Bioengineering. Work on the bio-bots was conducted at the Micro + Nanotechnology Lab at Illinois.


NewsAtIllinois | Light illuminates the way for bio-bots


Abstract of A modular approach to the design, fabrication, and characterization of muscle-powered biological machines

Biological machines consisting of cells and biomaterials have the potential to dynamically sense, process, respond, and adapt to environmental signals in real time. As a first step toward the realization of such machines, which will require biological actuators that can generate force and perform mechanical work, we have developed a method of manufacturing modular skeletal muscle actuators that can generate up to 1.7 mN (3.2 kPa) of passive tension force and 300 μN (0.56 kPa) of active tension force in response to external stimulation. Such millimeter-scale biological actuators can be coupled to a wide variety of 3D-printed skeletons to power complex output behaviors such as controllable locomotion. This article provides a comprehensive protocol for forward engineering of biological actuators and 3D-printed skeletons for any design application. 3D printing of the injection molds and skeletons requires 3 h, seeding the muscle actuators takes 2 h, and differentiating the muscle takes 7 d.

New machine-learning algorithms may revolutionize drug discovery — and our understanding of life

A new set of machine-learning algorithms can generate 3D structures of complex nanoscale protein molecules like this complex proteasome map, refined to 2.8 angstroms (0.28 nanometer) in 70 min from 49,954 particle images (credit: Structura Biotechnology Inc.)

A new set of machine-learning algorithms developed by researchers at the University of Toronto Scarborough can generate 3D structures of nanoscale protein molecules that could not be achieved in the past. The algorithms may revolutionize the development of new drug therapies for a range of diseases and may even lead to a better understanding of how life works at the atomic level, the researchers say.

Drugs work by binding to a specific protein molecule and changing the protein’s 3D shape, which alters the way the drug works once inside the body. The ideal drug is designed in a shape that will only bind to a specific protein or group of proteins that are involved in a disease, while eliminating side effects that occur when drugs bind to other proteins in the body.

A significant computational problem

Since proteins are tiny — about 1 to 100 nanometers — even smaller than the shortest wavelength of visible light, they can’t be seen directly without using sophisticated techniques like electron cryomicroscopy (cryo-EM). Cryo-EM uses high-power microscopes to take tens of thousands of low-resolution images of a frozen protein sample from different positions.

The computational problem is to then piece together the correct high-resolution 3D structure from these 2D images.

Existing techniques take several days or even weeks to generate a 3D structure on a cluster of computers, requiring as much as 500,000 CPU hours, according to the researchers. Also, existing techniques often generate incorrect structures unless an expert user provides an accurate guess of the molecule being studied.

CryoSPARC machine-learning algorithms can generate 3D structures of nanoscale protein molecules (credit: Structura Biotechnology Inc.)

New high-speed, deep-learning algorithms

That’s where the new set of algorithms* comes in. It reconstructs 3D structures of protein molecules using these images. “Our approach solves some of the major problems in terms of speed and number of structures you can determine,” says Professor David Fleet, chair of the Computer and Mathematical Sciences Department at U of Toronto Scarborough.

The algorithms could significantly aid in the development of new drugs because they provide a faster, more efficient means of arriving at the correct protein structure.

The new approach, called cryoSPARC, developed by the team’s startup, Structura Biotechnology Inc., eliminates the need for that prior knowledge and can make the computations possible in minutes on a single computer, using a standalone graphics processing unit (GPU) accelerated software package, according to the researchers.

The research was published in the current edition of the journal Nature Methods. It received funding from the Natural Sciences and Engineering Research Council of Canada (NSERC). The new cryo-EM platform is already being used in labs across North America, the researchers note.

* “We use an SGD [stochastic gradient descent] optimization scheme to quickly identify one or several low-resolution 3D structures that are consistent with a set of observed images. This algorithm allows for ab initio heterogeneous structure determination with no prior model of the molecule’s structure. Once approximate structures are determined, a branch-and-bound algorithm for image alignment helps rapidly refine structures to high resolution. The speed and robustness of these approaches allow structure determination in a matter of minutes or hours on a single inexpensive desktop workstation. … SGD was popularized as a key tool in deep learning for the optimization of nonconvex functions, and it results in near human-level performance in tasks like image and speech recognition.” — Ali Punjani et al./Nature Methods
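As a rough intuition for how SGD applies here, the toy sketch below fits a 3D density grid to random minibatches of 2D projections by gradient descent. It drastically simplifies the real problem: particle orientations are assumed known, the “microscope” is a plain axis sum, and there is no noise model or branch-and-bound refinement, so it illustrates the optimization idea only, not cryoSPARC itself.

```python
# Toy illustration of SGD-style 3D reconstruction from 2D projections.
import numpy as np

rng = np.random.default_rng(0)
N = 16                                    # voxel grid size (N x N x N)

# Ground-truth density: a smooth blob standing in for a protein map.
x, y, z = np.meshgrid(*[np.linspace(-1, 1, N)] * 3, indexing="ij")
truth = np.exp(-8 * ((x - 0.2) ** 2 + y ** 2 + (z + 0.3) ** 2))

def project(vol, axis):
    """Toy 'microscope': integrate the density along one axis."""
    return vol.sum(axis=axis)

# Simulated dataset: many projections along randomly chosen (but known) axes.
dataset = [(a, project(truth, a)) for a in rng.integers(0, 3, size=2000)]

estimate = np.zeros((N, N, N))            # start from a blank map ("ab initio")
lr = 0.05

for step in range(500):
    batch = [dataset[i] for i in rng.integers(0, len(dataset), size=32)]
    grad = np.zeros_like(estimate)
    for axis, image in batch:
        residual = project(estimate, axis) - image       # mismatch for this image
        grad += np.expand_dims(residual, axis) / N       # back-project the residual
    estimate -= lr * grad / len(batch)                   # stochastic gradient step

print("relative error:", np.linalg.norm(estimate - truth) / np.linalg.norm(truth))
```

With only three axis-aligned views the recovered map is necessarily crude, but the loop shows the key property the authors exploit: each update touches only a small random subset of images, so progress toward a consistent low-resolution structure is fast and needs no prior model.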

University of Toronto Scarborough | New algorithms may revolutionize drug discoveries and our understanding of life


Abstract of cryoSPARC: algorithms for rapid unsupervised cryo-EM structure determination

Single-particle electron cryomicroscopy (cryo-EM) is a powerful method for determining the structures of biological macromolecules. With automated microscopes, cryo-EM data can often be obtained in a few days. However, processing cryo-EM image data to reveal heterogeneity in the protein structure and to refine 3D maps to high resolution frequently becomes a severe bottleneck, requiring expert intervention, prior structural knowledge, and weeks of calculations on expensive computer clusters. Here we show that stochastic gradient descent (SGD) and branch-and-bound maximum likelihood optimization algorithms permit the major steps in cryo-EM structure determination to be performed in hours or minutes on an inexpensive desktop computer. Furthermore, SGD with Bayesian marginalization allows ab initio 3D classification, enabling automated analysis and discovery of unexpected structures without bias from a reference map. These algorithms are combined in a user-friendly computer program named cryoSPARC.

Beneficial AI conference develops ‘Asilomar AI principles’ to guide future AI research

Beneficial AI conference (credit: Future of Life Institute)

At the Beneficial AI 2017 conference, held January 5–8 at a conference center in Asilomar, California — a sequel to the 2015 AI Safety conference in Puerto Rico — the Future of Life Institute (FLI) brought together more than 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to address and formulate principles of beneficial AI.

FLI hosted a two-day workshop for its grant recipients, followed by a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the resulting technology is beneficial.

Beneficial AI conference participants (credit: Future of Life Institute)

The result was the 23 Asilomar AI Principles, intended to suggest AI research guidelines, such as “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence” and “An arms race in lethal autonomous weapons should be avoided”; identify ethics and values, such as safety and transparency; and address longer-term issues — notably, “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

To date, 2515 AI researchers and others are signatories of the Principles. The process is described here.

The conference location has historic significance. In 2009, the Association for the Advancement of Artificial Intelligence held the Asilomar Meeting on Long-Term AI Futures to address similar concerns. And in 1975, the Asilomar Conference on Recombinant DNA was held to discuss potential biohazards and regulation of emerging biotechnology.

The non-profit Future of Life Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Boston University Ph.D. candidate in Developmental Sciences Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. Its mission is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.”

FLI’s scientific advisory board includes physicist Stephen Hawking, SpaceX CEO Elon Musk, Astronomer Royal Martin Rees, and UC Berkeley Professor of Computer Science/Smith-Zadeh Professor in Engineering Stuart Russell.


Future of Life Institute | Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI [artificial general intelligence] (and beyond), and also what we would like to happen.