First application to pursue genome editing research in human embryos

Human embryos are at the center of a debate over the ethics of gene editing (credit: Dr. Yorgos Nikas/SPL)

The first application to pursue CRISPR/Cas9 genome-editing research in viable human embryos has been submitted to the UK’s fertility regulator by a team of researchers affiliated with the Francis Crick Institute in London.

“This research proposal is a troubling and provocative move,” commented Marcy Darnovsky, PhD, Executive Director of the Center for Genetics and Society.

“Modifying the genes of human embryos is deeply controversial because it can be used for worthwhile research on the one hand, or to produce genetically modified human beings on the other. A global public conversation about preventing such misuses is just getting underway, and this proposal could short-circuit those deliberations.

“It’s illegal in the UK and dozens of other countries to use a modified embryo to initiate a pregnancy, but in others — notably the U.S. — we don’t have that legal protection,” Darnovsky added. “If scientists and the regulatory agency in the UK are serious about responsible use of powerful new gene altering technologies, they won’t be rushing ahead in ways that could open the door to a world of genetically modified humans.”

If the UK Human Fertilisation & Embryology Authority were to issue this license, it would be the first time a national regulatory body has approved genome-editing research on the human germline.

The resulting experiments would be the second of their kind in this highly controversial area of research. In April, scientists working in China published research showing that they had created the first genetically modified human embryos. Those embryos were nonviable, and the CRISPR/Cas9 engineering was largely unsuccessful, producing off-target mutations and mosaicism that underscored the limits of our current understanding of genetics and genomics.

The response from the scientific community and the public after the first human embryo gene editing experiment in April was swift. Many scientists voiced support for either a pause or a moratorium on human germline modification.

On September 14, the National Academies announced that the International Summit on Human Gene Editing scheduled for December will now be co-hosted by the Royal Society (UK) and the Chinese Academy of Sciences.

The CRISPR controversy: faster, cheaper gene editing vs. bioethicists

Clustered regularly interspaced short palindromic repeat (CRISPR) technology employs a guide RNA to direct the Cas9 enzyme (light blue) to a target DNA sequence. Once there, Cas9 binds when it finds a protospacer-adjacent motif (PAM) sequence (red) in the DNA and cuts both strands, priming the gene sequence for editing. (credit: Adapted from OriGene Technologies)

Within the past few years, a new technology has made altering genes in plants and animals much easier than before. The tool, called CRISPR/Cas9 or just CRISPR, has spurred a flurry of research that could one day lead to hardier crops and livestock, as well as innovative biomedicines.

But along with potential benefits, it raises red flags, according to an open-access article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society.

Ann M. Thayer, a senior correspondent at C&EN, notes that scientists have long had the ability to remove, repair, or insert genetic material in cells, but the process was time-consuming and expensive. CRISPR (“clustered regularly interspaced short palindromic repeats”) streamlines gene editing dramatically, and its simplicity has enabled far more scientists to get involved in such work. In a short time, they have used CRISPR to edit genes in insects, plants, fish, rodents, and monkeys.
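To make the targeting logic from the figure caption above concrete, here is a minimal Python sketch, not any lab's actual pipeline: it scans a DNA string for a 20-nucleotide protospacer immediately followed by an NGG protospacer-adjacent motif (PAM), the signal SpCas9 needs before cutting, and reports the predicted blunt cut, commonly described as about 3 bp upstream of the PAM. The sequences and function names are invented for illustration.

```python
import re

# Illustrative sketch only: SpCas9 is steered by a ~20-nt spacer and cuts both DNA
# strands about 3 bp upstream (5') of an "NGG" protospacer-adjacent motif (PAM).
PAM = re.compile(r"[ACGT]GG")

def find_cut_sites(genome: str, spacer: str):
    """Yield (protospacer_start, cut_index) wherever the spacer matches next to an NGG PAM."""
    genome, spacer = genome.upper(), spacer.upper()
    start = 0
    while (hit := genome.find(spacer, start)) != -1:
        pam = genome[hit + len(spacer): hit + len(spacer) + 3]
        if PAM.fullmatch(pam):
            # Predicted blunt cut ~3 bp upstream of the PAM.
            yield hit, hit + len(spacer) - 3
        start = hit + 1

if __name__ == "__main__":
    # Hypothetical 20-nt spacer and target sequence, for demonstration only.
    spacer = "GACGCATAAAGATGAGACGC"
    target = "TTT" + spacer + "TGGAAACCC"  # spacer followed by a TGG PAM
    for protospacer_start, cut in find_cut_sites(target, spacer):
        print(f"protospacer at index {protospacer_start}, predicted cut before index {cut}")
```

Real guide design also has to account for the reverse strand, mismatch tolerance, and off-target scoring; this sketch only shows the exact-match, forward-strand case.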

The potential agricultural and medical applications that could result from the tool in the future have attracted the interest of venture capitalists and pharmaceutical companies, the article says. While it seems CRISPR work is moving full-steam ahead, a couple of recent developments could check its growth.

In April, Chinese scientists reported that they had attempted to alter a gene in nonviable human embryos. The announcement sparked bioethicists to call for a more cautious approach to gene editing. The other wrench in the system is an ongoing dispute over who should be awarded the patent for inventing CRISPR. Until these issues are resolved, some investors and researchers will opt to wait on the sidelines.


McGovern Institute for Brain Research at MIT | Genome Editing with CRISPR-Cas9

‘Information sabotage’ on Wikipedia claimed


Research has moved online, with more than 80 percent of U.S. students using Wikipedia for research papers, but controversial science information has egregious errors, claim researchers (credit: Pixabay)

Wikipedia entries on politically controversial scientific topics can be unreliable due to “information sabotage,” according to an open-access paper published today in the journal PLOS One.

The authors (Gene E. Likens* and Adam M. Wilson*) analyzed Wikipedia edit histories for three politically controversial scientific topics (acid rain, evolution, and global warming), and four non-controversial scientific topics (the standard model in physics, heliocentrism, general relativity, and continental drift).

“Egregious errors and a distortion of consensus science”

Using nearly a decade of data, the authors teased out daily edit rates, the mean size of edits (words added, deleted, or edited), and the mean number of page views per day. Across the board, politically controversial scientific topics were edited more heavily and viewed more often.

“Wikipedia’s global warming entry sees 2–3 edits a day, with more than 100 words altered, while the standard model in physics has around 10 words changed every few weeks,” Wilson notes. “The high rate of change observed in politically controversial scientific topics makes it difficult for experts to monitor their accuracy and contribute time-consuming corrections.”
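As a rough illustration of how such metrics can be derived from a page's revision history, the short Python sketch below computes edits per day, mean words changed per edit, and words changed per day from a list of (date, words changed) records. The records and field layout are hypothetical; the study itself parsed Wikipedia's full edit histories over roughly a decade.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical edit-history records: (edit date, net words added/deleted/changed).
edits = [
    (date(2012, 3, 1), 140),
    (date(2012, 3, 1), 35),
    (date(2012, 3, 2), 90),
    (date(2012, 3, 4), 12),
]

def edit_metrics(edits, days_observed):
    """Return (edits per day, mean words changed per edit, words changed per day)."""
    words_by_day = defaultdict(int)
    for day, words in edits:
        words_by_day[day] += abs(words)
    edits_per_day = len(edits) / days_observed
    mean_words_per_edit = mean(abs(w) for _, w in edits)
    words_per_day = sum(words_by_day.values()) / days_observed
    return edits_per_day, mean_words_per_edit, words_per_day

print(edit_metrics(edits, days_observed=4))  # -> (1.0, 69.25, 69.25)
```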

While the edit rate of the acid rain article was lower than that of the evolution and global warming articles, it was significantly higher than those of the non-controversial topics. “In the scientific community, acid rain is not a controversial topic,” said professor Likens. “Its mechanics have been well understood for decades. Yet, despite having ‘semi-protected’ status to prevent anonymous changes, Wikipedia’s acid rain entry receives near-daily edits, some of which result in egregious errors and a distortion of consensus science.”

Wikipedia’s limitations

Likens adds, “As society turns to Wikipedia for answers, students, educators, and citizens should understand its limitations for researching scientific topics that are politically charged. On entries subject to edit-wars, like acid rain, evolution, and global change, one can obtain — within seconds — diametrically different information on the same topic.”

However, the authors note that as Wikipedia matures, there is evidence that the breadth of its scientific content is increasingly based on source material from established scientific journals. They also note that Wikipedia employs algorithms to help identify and correct blatantly malicious edits, such as profanity. But in their view, it remains to be seen how Wikipedia will manage the dynamic, changing content that typifies politically charged science topics.

To help readers critically evaluate Wikipedia content, Likens and Wilson suggest identifying entries that are known to have significant controversy or edit wars. They also recommend quantifying the reputation of individual editors. In the meantime, users are urged to cast a critical eye on Wikipedia source material, which is found at the bottom of each entry.
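The paper does not prescribe a formula for editor reputation. One simple proxy is the fraction of an editor's revisions that survive without being reverted; the Python sketch below illustrates that idea with made-up revision data, as an assumption-laden example rather than the authors' method.

```python
from collections import Counter

# Hypothetical revision log: (editor, was_reverted).
revisions = [
    ("editorA", False), ("editorA", False), ("editorA", True),
    ("editorB", True),  ("editorB", True),  ("editorB", False),
]

def reputation_scores(revisions):
    """Fraction of each editor's edits that were *not* subsequently reverted."""
    total, kept = Counter(), Counter()
    for editor, was_reverted in revisions:
        total[editor] += 1
        if not was_reverted:
            kept[editor] += 1
    return {editor: kept[editor] / total[editor] for editor in total}

print(reputation_scores(revisions))  # roughly {'editorA': 0.67, 'editorB': 0.33}
```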

Wikipedia editors not impressed

In the Wikipedia “User_talk:Jimbo_Wales” page, several Wikipedia editors questioned the PLOS One authors’ statistical accuracy and conclusions, and noted that the data is three years out of date. “I don’t think this dataset can make any claim about controversial subjects at all,” one editor said. “It simply looks at too few articles, and there are too many explanations.”

“It has long been a source of bewilderment to me that we allow climate change denialists to run riot on Wikipedia,” said another.

* Dr. Gene E. Likens is President Emeritus of the Cary Institute of Ecosystem Studies and a Distinguished Research Professor at the University of Connecticut, Storrs. Likens co-discovered acid rain in North America, and counts among his accolades a National Medal of Science, a Tyler Prize, and elected membership in the National Academy of Sciences. Dr. Adam M. Wilson is a geographer at the University at Buffalo.


Abstract of Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale

Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for ‘edit wars’ surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically “controversial” (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider “noncontroversial” (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.

Should humans be able to marry robots?

(credit: AMC)

The Supreme Court’s recent 5–4 decision in Obergefell v. Hodges legalizing same-sex marriage raises the interesting question: what’s next on the “slippery slope”? Robot-human marriages? Robot-robot marriages?

Why yes, predicts Gary Marchant on Slate.

“There has recently been a burst of cogent accounts of human-robot sex and love in popular culture: Her and Ex Machina, the AMC drama series Humans, and the novel Love in the Age of Mechanical Reproduction,” he points out, along with David Levy’s 2007 book, Love and Sex With Robots.

But will the supremes’ decision open the door to robot-human marriage? Marchant explains that the decision was based on an analysis of four “principles and traditions”:

  • Individual autonomy, the right of each of us to make our own private choices. Check.
  • Between “two persons.” “Marriage responds to the universal fear that a lonely person might call out only to find no one there,” the court said. “It offers the hope of companionship and understanding and assurance that while both still live there will be someone to care for the other.” Existing care robots would exceed some people in meeting that criterion. Check.
  • Marriage safeguards children and families. Could a future robot be an effective parent? Why not? Check.
  • Marriage is “central to many practical and legal realities of modern life, such as taxation, inheritance, property rights, hospital access and insurance coverage.” Hmm … sounds like a legal/accounting robot would excel in those areas. Double check.

“While few people would understand or support robot-human intimacy today, as robots get more sophisticated and humanlike, more and more people will find love, happiness, and intimacy in the arms of a machine.”

As HUMANS viewers know, at least in fiction, “Robot sex and love is coming, and robot-human marriage will likely not be far behind.”


AI and robotics researchers call for global ban on autonomous weapons

More than 1,000 leading artificial intelligence (AI) and robotics researchers and others, including Stephen Hawking and Elon Musk, have signed an open letter, published today by the Future of Life Institute (FLI), calling for a ban on offensive autonomous weapons.

FLI defines “autonomous weapons” as those that select and engage targets without human intervention, such as armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The researchers believe that AI technology has reached a point where the deployment of such systems is feasible within years, not decades, and that the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Only a matter of time until they appear on the black market

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

The proposed ban is similar to the broadly supported international agreements that have successfully prohibited chemical and biological weapons, blinding laser weapons, and space-based nuclear weapons.

“We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” the letter concludes.

List of signatories