A deep-learning system to alert companies before litigation

(credit: Intraspexion, Inc.)

Imagine a world with less litigation.

That’s the promise of a deep-learning system developed by Intraspexion, Inc. that can alert company or government attorneys to forthcoming risks before their organizations get hit with expensive litigation.

“These risks show up in internal communications such as emails,” said CEO Nick Brestoff. “In-house attorneys have been blind to these risks, so they are stuck with managing the lawsuits.”

Example of employment discrimination indicators buried in emails (credit: Intraspexion, Inc.)

Intraspexion’s first deep learning model has been trained to find the risks of employment discrimination. “What we can do with employment discrimination now we can do with other litigation categories, starting with breach of contract and fraud, and then scaling up to dozens more,” he said.
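Intraspexion has not published its model internals, but the general pattern it describes (train a classifier on internal emails that previously preceded discrimination claims, then score new email for the same signals) can be sketched in a few lines. The example below is purely illustrative, using scikit-learn and made-up emails rather than the company’s deep-learning system:

```python
# Hypothetical sketch: flag emails that resemble past employment-discrimination
# complaints. This is NOT Intraspexion's system, just a minimal illustration of
# training a text classifier on attorney-labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed toy training data: emails previously labeled by counsel.
risky_emails = [
    "He said she was too old for the promotion and should retire",
    "Let's not interview her, she will just go on maternity leave",
]
routine_emails = [
    "Please send the Q3 budget spreadsheet before Friday's meeting",
    "The vendor contract renewal is attached for your signature",
]

texts = risky_emails + routine_emails
labels = [1] * len(risky_emails) + [0] * len(routine_emails)

# A linear model over TF-IDF features stands in here for the deep network.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_email = "We should push him out before he files an age complaint"
risk_score = model.predict_proba([new_email])[0][1]
print(f"Estimated discrimination-risk score: {risk_score:.2f}")
```

In a production system, the toy logistic-regression-over-TF-IDF model would be replaced by a deep network trained on a large corpus of labeled internal email.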

Brestoff claims that deep learning enables a huge paradigm shift for the legal profession. “We’re going straight after the behemoth of litigation. This shift doesn’t make attorneys better able to know the law; it makes them better able to know the facts, and to know them early enough to do something about them.”

And to prevent huge losses. “As I showed in my book, Preventing Litigation: An Early Warning System, using 10 years of cost (aggregated as $1.6 trillion) and caseload data (about 4 million lawsuits, federal and state, for that same time frame), the average cost per case was at least about $350,000,” Brestoff explained to KurzweilAI in an email.

Brestoff, who studied engineering at Caltech before attending law school at USC, will present Intraspexion’s deep learning system in a talk at the AI World Conference & Exposition 2016, November 7–9 in San Francisco.


Will AI replace judges and lawyers?

(credit: iStock)

An artificial intelligence method developed by University College London computer scientists and associates has predicted the judicial decisions of the European Court of Human Rights (ECtHR) with 79% accuracy, according to a paper published Monday, Oct. 24 in PeerJ Computer Science.

The method is the first to predict the outcomes of a major international court by automatically analyzing case text using a machine-learning algorithm.*

“We don’t see AI replacing judges or lawyers,” said Nikolaos Aletras, who led the study at UCL Computer Science, “but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes. It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.”

Judgments correlated with facts rather than legal arguments

(credit: European Court of Human Rights)

In developing the method, the team found that judgments by the ECtHR are highly correlated to non-legal (real-world) facts, rather than direct legal arguments, suggesting that judges of the Court are, in the jargon of legal theory, “realists” rather than “formalists.”

This supports findings from previous studies of the decision-making processes of other high level courts, including the U.S. Supreme Court.

The team of computer and legal scientists extracted case information published by the ECtHR in its publicly accessible database (applications made to the court were not available), explained UCL co-author Vasileios Lampos, PhD.

They identified English-language data sets for 584 cases relating to Articles 3, 6 and 8** of the Convention and applied an AI algorithm to find patterns in the text. To prevent bias and mislearning, they selected an equal number of violation and non-violation cases.

Predictions based on analysis of text

The most reliable factors for predicting the court’s decision were found to be the language used as well as the topics and circumstances mentioned in the case text (the “circumstances” section of the text includes information about the case factual background). By combining the information extracted from the abstract “topics” that the cases cover and “circumstances” across data for all three articles, an accuracy of 79% was achieved.

“Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge, so this is the first time judgments have been predicted using analysis of text prepared by the court. We expect this sort of tool would improve efficiencies of high level, in demand courts, but to become a reality, we need to test it against more articles and the case data submitted to the court,” added Lampos.

Researchers at the University of Sheffield and the University of Pennsylvania were also involved in the study.

* “We define the problem of the ECtHR case prediction as a binary classification task. We utilise textual features, i.e., N-grams and topics, to train Support Vector Machine (SVM) classifiers. We apply a linear kernel function that facilitates the interpretation of models in a straightforward manner.” — Authors of PeerJ Computer Science paper.
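In scikit-learn terms, that setup (a balanced set of violation and non-violation cases, n-gram features, and a linear-kernel SVM) might look roughly like the sketch below. The case texts are tiny placeholders, not the actual ECtHR data, and the paper’s “topic” features (clusters of similar n-grams) are omitted for brevity:

```python
# Minimal sketch of the paper's setup: n-gram textual features feeding a
# linear-kernel SVM, trained on a balanced set of violation / non-violation
# cases. The "case texts" below are placeholders, not real ECtHR text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

violation_cases = [
    "applicant detained without access to lawyer and subjected to ill treatment",
    "authorities searched the home and correspondence without judicial warrant",
]
non_violation_cases = [
    "domestic courts heard the applicant promptly and gave reasoned judgments",
    "interference with private life was prescribed by law and proportionate",
]

texts = violation_cases + non_violation_cases          # equal class sizes
labels = [1] * len(violation_cases) + [0] * len(non_violation_cases)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 4)),  # contiguous word sequences (n-grams)
    LinearSVC(C=1.0),                     # linear kernel, easier to interpret
)
clf.fit(texts, labels)

print(clf.predict(["the applicant was held incommunicado and beaten in custody"]))
```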

** Article 3 prohibits torture and inhuman and degrading treatment (250 cases); Article 6 protects the right to a fair trial (80 cases); and Article 8 provides a right to respect for one’s “private and family life, his home and his correspondence” (254 cases).


Abstract of Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective

Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the convention of human rights. Textual information is represented using contiguous word sequences, i.e. N-grams, and topics. Our models can predict the court’s decisions with a strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis.


Facebook’s Secret Conversations

(credit: Facebook)

Facebook began today (Friday, July 8) rolling out a new beta-version feature for Messenger called “Secret Conversations,” allowing for “one-to-one secret conversations … that will be end-to-end encrypted and which can only be read on one device of the person you’re communicating with.”

Facebook suggests the feature will be useful for discussing an illness or sending financial information (as in the pictures above). You can choose to set a timer to control how long each message you send remains visible within the conversation. (Rich content such as GIFs and videos, as well as payments, is not supported.)

The technology, described in a technical whitepaper (open access), is based on the Signal Protocol developed by Open Whisper Systems, which is also used in Open Whisper Systems’ own Signal messaging app (Chrome, iOS, Android),  WhatsApp, and Google’s Allo (not yet launched).
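Facebook’s actual implementation follows the Signal Protocol described in the whitepaper; the snippet below is not that protocol, only a toy illustration of the two user-visible ideas (a secret key per conversation and a per-message expiry timer), using the Python cryptography library’s Fernet recipe:

```python
# NOT the Signal Protocol -- just a toy illustration of symmetric,
# key-per-conversation encryption with a per-message expiry timer.
import time
from cryptography.fernet import Fernet, InvalidToken

conversation_key = Fernet.generate_key()   # shared out-of-band in this toy model
box = Fernet(conversation_key)

token = box.encrypt(b"Results came back: appointment is Tuesday at 3pm")

# The recipient can read the message while it is still "fresh"...
print(box.decrypt(token, ttl=5))

# ...but once the timer lapses, decryption is refused.
time.sleep(6)
try:
    box.decrypt(token, ttl=5)
except InvalidToken:
    print("Message expired")
```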

Unlike WhatsApp and iMessage, which automatically encrypt every message, Secret Conversations works only from a single device and is opt-in, which “will likely rankle many privacy advocates,” says Wired.

But not as much as all of these encrypted services rankle law enforcement agencies, since the feature hampers surveillance capabilities, it adds.

New tech could have helped police locate shooters in Dallas

Potential shooter location in Dallas (credit: Fox News)

JULY 8, 3:56 AM EDT — Livestreamed data from multiple users with cell phones and other devices could be used to help police locate shooters in a situation like the one going on right now in Dallas, says Jon Fisher, CEO of San Francisco-based CrowdOptic.

Here’s how it would work: You view (or record a video of) a shooter with your phone. Your location and the direction you are facing is now immediately available on your device and could be coordinated with data from other persons at the scene to triangulate the position of the shooter.
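The underlying geometry is ordinary bearing intersection: each phone contributes a known position and the compass direction its camera is facing, and the rays from two or more phones cross at the estimated target. A minimal flat-plane sketch (a real system such as CrowdOptic’s would presumably work in geodetic coordinates and fuse many noisy observers):

```python
import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two rays given as (x, y) positions and compass bearings
    (degrees clockwise from north) on a local flat x/y grid (meters)."""
    # Unit direction vectors: north = +y, east = +x.
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t*d1 == p2 + s*d2 for t using the 2D cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # bearings are parallel; no single intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two phones 200 m apart, both pointed toward the same rooftop.
print(intersect_bearings((0, 0), 45.0, (200, 0), 315.0))  # approx. (100.0, 100.0)
```

With more than two observers, a least-squares fit over all pairwise intersections would help damp out compass error.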

A CrowdOptic “cluster” with multiple people focused on the same object (credit: CrowdOptic)

This technology, called the “CrowdOptic Interactive Streaming platform,” is already in place, using Google Glass livestreaming, in several organizations, including UCSF Medical Center, Denver Broncos, and Futton, Inc. (working with Chinese traffic police).

Fisher told KurzweilAI his company’s software is also integrated with Cisco Jabber livestreaming video and conferencing products (and soon Spark), and with Sony SmartEyeglass, and that iOS and Android apps are planned.

CrowdOptic also has a product called CrowdOptic Eye, a “powerful, low-bandwidth live streaming device designed to … broadcast live video and two-way audio from virtually anywhere.”

“We’re talking about phones now, but think about all other devices, such as drones, that will be delivering these feeds to CNN and possibly local police,” he said.

ADDED July 11:

“When all attempts to negotiate with the suspect, Micah Johnson, failed under the exchange of gunfire, the Department utilized the mechanical tactical robot, as a last resort, to deliver an explosion device to save the lives of officers and citizens. The robot used was the Remotec, Model  F-5, claw and arm extension with an explosive device of C4 plus ‘Det’ cord.  Approximate weight of total charge was one pound.” — Statement July 9, 2016 by Dallas police chief David O. Brown

The Dallas police department’s decision to use a robot to kill the shooter Thursday July 7, raises questions. For example: Why wasn’t a non-lethal method used with the robot, such as a tranquilizer dart, which also might have given police an opportunity to acquire more information, including the location of claimed bombs and cohorts possibly associated with the crime?

How to make opaque AI decisionmaking accountable

Machine-learning algorithms are increasingly used in making important decisions about our lives — such as credit approval, medical diagnoses, and job applications — but exactly how they work usually remains a mystery. Now Carnegie Mellon University researchers may have devised an effective way to improve transparency and head off confusion or possible legal issues.

CMU’s new Quantitative Input Influence (QII) testing tools can generate “transparency reports” that provide the relative weight of each factor in the final decision, claims Anupam Datta, associate professor of computer science and electrical and computer engineering.

Testing for discrimination

These reports could also be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to determine whether a decision-making system inappropriately discriminated, based on factors like race and gender.

To achieve that, the QII measures consider correlated inputs while measuring influence. For example, consider a system that assists in hiring decisions for a moving company, in which two inputs, gender and the ability to lift heavy weights, are positively correlated with each other and with hiring decisions.

Yet transparency into whether the system actually uses weightlifting ability or gender in making its decisions has substantive implications for determining if it is engaging in discrimination. In this example, the company could keep the weightlifting ability fixed, vary gender, and check whether there is a difference in the decision.
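That intervention translates directly into code: hold the other inputs at their observed values, scramble only the input under scrutiny, and measure how often the model’s decision changes. The sketch below illustrates the idea with a made-up hiring dataset and a stand-in classifier; it is not the CMU team’s QII implementation:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def intervention_influence(model, X, feature, n_draws=50, seed=0):
    """Estimate a feature's influence on a black-box classifier by replacing
    that column with random draws from its marginal distribution (keeping
    everything else fixed) and measuring how often decisions change."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flip_rate = 0.0
    for _ in range(n_draws):
        X_int = X.copy()
        X_int[feature] = rng.permutation(X[feature].to_numpy())
        flip_rate += np.mean(model.predict(X_int) != baseline)
    return flip_rate / n_draws

# Hypothetical moving-company data: gender is correlated with lifting ability,
# but the hiring decision itself depends only on lifting ability.
rng = np.random.default_rng(1)
gender = rng.integers(0, 2, 500)
lift_kg = 30 + 20 * gender + rng.normal(0, 5, 500)
hired = (lift_kg > 45).astype(int)
X = pd.DataFrame({"gender": gender, "lift_kg": lift_kg})

model = DecisionTreeClassifier().fit(X, hired)
print("gender influence :", intervention_influence(model, X, "gender"))
print("lift_kg influence:", intervention_influence(model, X, "lift_kg"))
```

Because the toy decision rule uses only lifting ability, the measured influence of gender comes out near zero even though gender and hiring are correlated, which is exactly the distinction QII is designed to surface.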

CMU researchers are careful to state in an open-access report on QII (presented at the IEEE Symposium on Security and Privacy, May 23–25, in San Jose, Calif.), that “QII does not suggest any normative definition of fairness. Instead, we view QII as a diagnostic tool to aid fine-grained fairness determinations.”


Is your AI biased?

The Ford Foundation published a controversial blog post last November stating that “while we’re led to believe that data doesn’t lie — and therefore, that algorithms that analyze the data can’t be prejudiced — that isn’t always true. The origin of the prejudice is not necessarily embedded in the algorithm itself. Rather, it is in the models used to process massive amounts of available data and the adaptive nature of the algorithm. As an adaptive algorithm is used, it can learn societal biases it observes.

“As Professor Alvaro Bedoya, executive director of the Center on Privacy and Technology at Georgetown University, explains, ‘any algorithm worth its salt’ will learn from the external process of bias or discriminatory behavior. To illustrate this, Professor Bedoya points to a hypothetical recruitment program that uses an algorithm written to help companies screen potential hires. If the hiring managers using the program only select younger applicants, the algorithm will learn to screen out older applicants the next time around.”


Influence variables

The QII measures also quantify the joint influence of a set of inputs (such as age and income) on outcomes, and the marginal influence of each input within the set. Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using “principled game-theoretic aggregation” measures that were previously applied to measure influence in revenue division and voting.
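The “principled game-theoretic aggregation” the researchers refer to is the Shapley value: a feature’s influence is its marginal contribution to the joint influence of a set, averaged over many random orderings of the features. A brief Monte Carlo sketch, assuming a joint-influence function is already available (the toy table below stands in for one):

```python
import random

def shapley_values(features, set_influence, n_samples=200, seed=0):
    """Monte Carlo estimate of each feature's Shapley value, given a function
    set_influence(subset) -> joint influence of that subset of features."""
    rng = random.Random(seed)
    shapley = {f: 0.0 for f in features}
    for _ in range(n_samples):
        order = list(features)
        rng.shuffle(order)
        seen = []
        prev = set_influence(frozenset())
        for f in order:
            seen.append(f)
            current = set_influence(frozenset(seen))
            shapley[f] += (current - prev) / n_samples  # marginal contribution
            prev = current
    return shapley

# Toy joint-influence table: age and income drive the outcome; zip code adds nothing.
toy = {
    frozenset(): 0.0,
    frozenset({"age"}): 0.3, frozenset({"income"}): 0.4, frozenset({"zip"}): 0.0,
    frozenset({"age", "income"}): 0.9, frozenset({"age", "zip"}): 0.3,
    frozenset({"income", "zip"}): 0.4, frozenset({"age", "income", "zip"}): 0.9,
}
print(shapley_values(["age", "income", "zip"], toy.get))
# expect roughly {'age': 0.4, 'income': 0.5, 'zip': 0.0}
```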

Examples of outcomes from transparency reports for two job applicants. Left: “Mr. X” is deemed to be a low-income individual by an income classifier learned from the data. This result may be surprising to him: he reports high capital gains ($14k), and only 2.1% of people with capital gains higher than $10k are reported as low income. In fact, he might be led to believe that his classification may be a result of his ethnicity or country of origin. Examining his transparency report in the figure, however, we find that the most influential features that led to his negative classification were Marital Status, Relationship and Education. Right: “Mr. Y” has even higher capital gains than Mr. X. Mr. Y is a 27-year-old, with only Preschool education, and is engaged in fishing. Examination of the transparency report reveals that the most influential factor for negative classification for Mr. Y is his Occupation. Interestingly, his low level of education is not considered very important by this classifier. (credit: Anupam Datta et al./2016 P IEEE S SECUR PRIV)

“To get a sense of these influence measures, consider the U.S. presidential election,” said Yair Zick, a post-doctoral researcher in the CMU Computer Science Department. “California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power.”

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that the QII provided better explanations than standard associative measures for a host of scenarios they considered, including sample applications for predictive policing and income prediction.

Privacy concerns

But transparency reports could also potentially compromise privacy, so in the paper, the researchers also explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise.
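For a numeric entry in a transparency report, the standard way to achieve differential privacy is the Laplace mechanism: add noise scaled to the entry’s sensitivity divided by the privacy budget ε. A minimal sketch (not the paper’s exact construction):

```python
import numpy as np

def laplace_private(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Release `value` with epsilon-differential privacy via the Laplace
    mechanism: add noise with scale = sensitivity / epsilon."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A transparency-report entry, e.g. "income had influence 0.42 on this decision".
# If one person's record can change that score by at most 0.01 (its sensitivity),
# a moderate privacy budget perturbs it only slightly:
print(laplace_private(0.42, sensitivity=0.01, epsilon=0.5))
```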

QII is not yet available, but the CMU researchers are seeking collaboration with industrial partners so that they can employ QII at scale on operational machine-learning systems.


Abstract of Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque—it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g. loan decisions) and the marginal influence of individual inputs within such a set (e.g., income). Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting. Further, since transparency reports could compromise privacy, we explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise. Our empirical validation with standard machine learning algorithms demonstrates that QII measures are a useful transparency mechanism when black box access to the learning system is available. In particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private while preserving accuracy.

Should you trust a robot in emergencies?

Would you follow a broken-down emergency guide robot in a mock fire? (credit: Georgia Tech)

In a finding reminiscent of the bizarre Stanford prison experiment, subjects in an experiment blindly followed a robot in a mock building-fire emergency — even when it led them into a dark room full of furniture and they were told the robot had broken down.

The research was designed to determine whether or not building occupants would trust a robot designed to help them evacuate a high-rise in case of fire or other emergency, said Alan Wagner, a senior research engineer in the Georgia Tech Research Institute (GTRI).

In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side.

Blind obedience to a robot authority figure

Georgia Tech Research Institute (GTRI) Research Engineer Paul Robinette adjusts the arms of the “Rescue Robot,” built to study issues of trust between humans and robots. (credit: Rob Felt, Georgia Tech)

In some cases, the robot (controlled by a hidden researcher), brightly-lit with red LEDs and white “arms” that served as pointers, led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room.

For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke and the robot directed them to an exit in the back of the building instead of toward a doorway marked with exit signs that had been used to enter the building.

The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

Only when the robot made obvious errors during the emergency part of the experiment did the participants question its directions. However, some subjects still followed the robot’s instructions — even when it directed them toward a darkened room that was blocked by furniture.

In future research, the scientists hope to learn more about why the test subjects trusted the robot, whether that response differs by education level or demographics, and how the robots themselves might indicate the level of trust that should be given to them.

How to prevent humans from trusting robots too much

The research is part of a long-term study of how humans trust robots, an important issue as robots play a greater role in society. The researchers envision using groups of robots stationed in high-rise buildings to point occupants toward exits and urge them to evacuate during emergencies. Research has shown that people often don’t leave buildings when fire alarms sound, and that they sometimes ignore nearby emergency exits in favor of more familiar building entrances.

But in light of these findings, the researchers are reconsidering the questions they should ask. “A more important question now might be to ask how to prevent them from trusting these robots too much.”

But there are other issues of trust in human-robot relationships that the researchers want to explore, the researchers say: “Would people trust a hamburger-making robot to provide them with food? If a robot carried a sign saying it was a ‘child-care robot,’ would people leave their babies with it? Will people put their children into an autonomous vehicle and trust it to take them to grandma’s house? We don’t know why people trust or don’t trust machines.”

The research, believed to be the first to study human-robot trust in an emergency situation, is scheduled to be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand.


Georgia Tech | In Emergencies, Should You Trust a Robot?


Abstract of Overtrust of Robots in Emergency Evacuation Scenarios

Robots have the potential to save lives in emergency scenarios, but could have an equally disastrous effect if participants overtrust them. To explore this concept, we performed an experiment where a participant interacts with a robot in a non-emergency task to experience its behavior and then chooses whether to follow the robot’s instructions in an emergency or not. Artificial smoke and fire alarms were used to add a sense of urgency. To our surprise, all 26 participants followed the robot in the emergency, despite half observing the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes. Even when the robot pointed to a dark room with no discernible exit the majority of people did not choose to safely exit the way they entered.

Impact of automation puts up to 85% of jobs in developing countries at risk

The risk of jobs being replaced by automation varies by country (credit: World Bank Development Report, 2016)

A new report from the Oxford Martin School and Citi considers the risks of job automation to developing countries, estimated to range from 55% in Uzbekistan to 85% in Ethiopia — a substantial share in major emerging economies, including China and India (77% and 69% respectively).

The report, Technology at Work v2.0: The Future Is Not What It Used to Be, builds on 2013 research by Oxford Martin School’s Carl Benedikt Frey and Michael Osborne, who found that nearly half of U.S. jobs could be at risk of computerization (as KurzweilAI reported), and on the first Technology at Work report, published in 2015.

The Future Is Not What It Used to Be provides in-depth analysis of the vulnerabilities of countries and cities to job automation, explores what automation will mean for traditional models of economic growth, and considers how governments can prepare for the potentially disruptive impacts of job automation on society.

47% of US jobs are at risk from automation, but not all cities have the same job risk (credit: Berger, Frey and Osborne/Citi Research report, 2015)

Key areas of analysis in the report include:

  • While manufacturing productivity has traditionally enabled developing countries to close the gap with richer countries, automation is likely to impact negatively on their ability to do this, and new growth models will be required.
  • The impact of automation may be more disruptive for developing countries, due to lower levels of consumer demand and limited social safety nets. With automation and developments in 3D printing likely to drive companies to move manufacturing closer to home, developing countries risk “premature de-industrialisation.”
  • Even within countries, the impact of automation will lead to the divergence of the fortunes of different cities. While a number of cities may have been affected by, for example, offshoring of manufacturing in the past, the expanding scope of automation now means that even low-end service jobs are at risk, making a different set of cities vulnerable.
  • The share of total U.S. employment at risk remains at about 47 percent, unchanged since the 2013 study. The U.S. cities most at risk include Fresno and Las Vegas; the least at risk include Boston, Washington DC, and New York. In relatively skilled cities, such as Boston, only 38% of jobs are susceptible to automation; in Fresno, by contrast, the equivalent figure is 54%. The computing revolution has been closely linked to the fortunes of U.S. cities, with cities that became centers of information technology gaining a comparative advantage in new job creation that has persisted since. The tendency of skilled jobs to cluster in initially skilled cities has, since the computer revolution of the 1980s, contributed to increased income disparities between cities.
  • Digital industries have not created many new jobs. Since 2000, just 0.5% of the US workforce has shifted into new technology industries, most of which are directly associated with digital technologies.
  • The largest number of job openings in the coming decades is projected to be in the health sector, which is expected to add more than 4 million new jobs in the U.S. from 2012 to 2022.

25 least computerizable jobs in U.S. (credit: Carl Benedikt Frey and Michael A. Osborne/Oxford Martin School)

Most investors surveyed by Citi feel automation poses a major challenge to societies and policymakers, but are optimistic that automation and technology will help to boost productivity over time, and believe that investment in education will be the most effective policy response to the potential negative impacts of automation.

“When it comes to cities, the risk is clear: those that specialize in skills that are easily automatable stand to lose, while the ones that manage the industrial renewal process, particularly by creating new industries, stand to gain,” said Frey, Co-Director of the Oxford Martin Programme on Technology and Employment, and Oxford Martin Citi Fellow.

Kathleen Boyle, Citi GPS Managing Editor, acknowledges that mindsets need to change, saying: “A key challenge of the 21st century will be recognizing that accelerating technological change is going to affect both employment and society.

“The magnitude of the challenge ahead needs to be recognized and an agenda set for policy to address issues such as educational needs, to minimize the negative effect of automation on workers. And it is crucial that this conversation starts now.”

AI ‘alarmists’ nominated for 2015 ‘Luddite Award’

An 1844 engraving showing a post-1820s Jacquard loom (credit: public domain/Penny Magazine)

The Information Technology and Innovation Foundation (ITIF) today (Dec. 21) announced 10 nominees for its 2015 Luddite Award. The annual “honor” recognizes the year’s most egregious example of a government, organization, or individual stymieing the progress of technological innovation.

ITIF also opened an online poll and invited the public to help decide the “winner.” The result will be announced in late January.

The nominees include (in no specific order):

1. Alarmists, including respected luminaries such as Elon Musk, Stephen Hawking, and Bill Gates, touting an artificial-intelligence apocalypse.

2. Advocates, including Hawking and Noam Chomsky, seeking a ban on “killer robots.”

3. Vermont and other states limiting automatic license plate readers.

4. Europe, China, and others choosing taxi drivers over car-sharing passengers.

5. The U.S. paper industry opposing e-labeling.

6. California’s governor vetoing RFID tags in driver’s licenses.

7. Wyoming effectively outlawing citizen science.

8. The Federal Communications Commission limiting broadband innovation.

9. The Center for Food Safety fighting genetically improved food.

10. Ohio and other states banning red light cameras.

‘Paranoia about evil machines’

(credit: Paramount Pictures)

“Just as Ned Ludd wanted to smash mechanized looms and halt industrial progress in the 19th century, today’s neo-Luddites want to foil technological innovation to the detriment of the rest of society,” said Robert D. Atkinson, ITIF’s founder and president.

“If we want a world in which innovation thrives, then everyone’s New Year’s resolution should be to replace neo-Luddism with an attitude of risk-taking and faith in the future.”

Atkinson notes that “paranoia about evil machines has swirled around in popular culture for more than 200 years, and these claims continue to grip the popular imagination, in no small part because these apocalyptic ideas are widely represented in books, movies, and music.

“The last year alone saw blockbuster films with a parade of digital villains, such as Avengers: Age of Ultron, Ex Machina, and Terminator: Genisys.”

He also cites statements in Oxford professor Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies, “reflecting the general fear that ‘superintelligence’ in machines could outperform ‘the best human minds in every field, including scientific creativity, general wisdom and social skills.’ Bostrom argues that artificial intelligence will advance to a point where its goals are no longer compatible with that of humans and, as a result, superintelligent machines will seek to enslave or exterminate us.”

“Raising such sci-fi doomsday scenarios just makes it harder for the public, policymakers, and scientists to support more funding for AI research,” Atkinson concludes. “Indeed, continuing the negative campaign against artificial intelligence could potentially dry up funding for AI research, other than money for how to control, rather than enable, AI. What legislator wants to be known as ‘the godfather of the technology that destroyed the human race’?”

Not mentioned in the ITIF statement is the recently announced non-profit “OpenAI” research company founded by Elon Musk and associates, committing $1 billion toward their goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.”

The 2014 Luddite Award winners

The winners last year: the states of Arizona, Michigan, New Jersey, and Texas, for taking action to prevent Tesla from opening stores in their states to sell cars directly to consumers. Other nominees included:

  • National Rifle Association (NRA) for its opposition to smart guns
  • “Stop Smart Meters” for seeking to stop smart innovation in meters and cars
  • Free Press for lobbying for rules to stop innovation in broadband networks
  • The media and pundits for claiming that “robots” are killing jobs

Imaging study shows you (and your fluid intelligence) can be identified by your brain activity

A connectome maps connections between different brain networks. (credit: Emily Finn)

Your brain activity appears to be as unique as your fingerprints, a new Yale-led “connectome fingerprinting” study published Monday (Oct. 12) in the journal Nature Neuroscience has found.

By analyzing* “connectivity profiles” (coordinated activity between pairs of brain regions) of fMRI (functional magnetic resonance imaging) images from 126 subjects, the Yale researchers were able to identify specific individuals from the fMRI data alone by their identifying “fingerprint.” The researchers could also assess the subjects’ “fluid intelligence.”

“In most past studies, data have been used to draw contrasts between, say, patients and healthy controls,” said Emily Finn, a Ph.D. student in neuroscience and co-first author of the paper. “We have learned a lot from these sorts of studies, but they tend to obscure individual differences, which may be important.”

Two frontoparietal networks, the medial frontal (purple) and the frontoparietal (teal), out of the 268 brain regions were found best for identifying people and predicting fluid intelligence (credit: Emily S Finn/Xilin Shen, CC BY-ND)

The researchers looked specifically at areas that showed synchronized activity. The characteristic connectivity patterns were distributed throughout the brain, but notably, two frontoparietal networks emerged as most distinctive.

“These networks are comprised of higher-order association cortices rather than primary sensory regions; these cortical regions are also the most evolutionarily recent and show the highest inter-subject variance,” the researchers note in their paper. “These networks tend to act as flexible hubs, switching connectivity patterns according to task demands. Additionally, broadly distributed across-network connectivity has been reported in these same regions, suggesting a role in large-scale coordination of brain activity.”

Notably, the researchers were able to match the scan of a given individual’s brain activity during one imaging session to that person’s brain scan at another time, even when the person was engaged in a different task in each session, although in that case the accuracy dropped from 98–99% to 80–90%.

Predicting and treating neuropsychiatric illnesses (or criminal behavior?)

Finn said she hopes that this ability might one day help clinicians predict or even treat neuropsychiatric diseases based on individual brain connectivity profiles. The paper notes that “aberrant functional connectivity in the frontoparietal networks has been linked to a variety of neuropsychiatric illnesses.”

The study raises troubling questions. “Richard Haier, an intelligence researcher at the University of California, Irvine, [suggests that] schools could scan children to see what sort of educational environment they’d thrive in, or determine who’s more prone to addiction, or screen prison inmates to figure out whether they’re violent or not,” Wired reports.

“Minority Report” Hawk-eye display (credit: Fox)

Or perhaps identify future criminals — or even predict future crimes, as in “Hawk-eye” technology (portrayed in Minority Report episode 3).

Identifying fluid intelligence

The researchers also discovered that the same two frontoparietal networks were most predictive of the level of fluid intelligence (the capacity for on-the-spot reasoning to discern patterns and solve problems, independent of acquired knowledge) shown on intelligence tests. That’s consistent with previous reports that structural and functional properties of these networks relate to intelligence.

Data for the study came from the Human Connectome Project led by the WU-Minn Consortium, which is funded by the 16 National Institutes of Health (NIH) Institutes and Centers that support the NIH Blueprint for Neuroscience Research, and by the McDonnell Center for Systems Neuroscience at Washington University. Primary funding for the Yale researchers was provided by the NIH.

* Finn and co-first author Xilin Shen, under the direction of R. Todd Constable, professor of diagnostic radiology and neurosurgery at Yale, compiled fMRI data from 126 subjects who underwent six scan sessions over two days. Subjects performed different cognitive tasks during four of the sessions. In the other two, they simply rested. Researchers looked at activity in 268 brain regions: specifically, coordinated activity between pairs of regions. Highly coordinated activity implies two regions are functionally connected. Using the strength of these connections across the whole brain, the researchers were able to identify individuals from fMRI data alone, whether the subject was at rest or engaged in a task. They were also able to predict how subjects would perform on tasks.
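Once each session has been reduced to a connectivity profile (a vector of correlation strengths for every pair of the 268 regions), the identification step is essentially nearest-neighbor matching by correlation. A schematic sketch with random stand-in data rather than real fMRI profiles:

```python
import numpy as np

def identify(profiles_session1, profiles_session2):
    """For each subject's connectivity profile in session 1, predict their
    identity as the session-2 profile with the highest Pearson correlation."""
    # Rows = subjects, columns = edge strengths (region-pair correlations).
    corr = np.corrcoef(profiles_session1, profiles_session2)
    n = profiles_session1.shape[0]
    cross = corr[:n, n:]                 # session-1 x session-2 similarity matrix
    return cross.argmax(axis=1)          # predicted identity per subject

# Stand-in data: 126 "subjects", 268*267/2 edges, session 2 = session 1 + noise.
rng = np.random.default_rng(0)
n_subj, n_edges = 126, 268 * 267 // 2
session1 = rng.normal(size=(n_subj, n_edges))
session2 = session1 + 0.5 * rng.normal(size=(n_subj, n_edges))

predicted = identify(session1, session2)
print("identification accuracy:", np.mean(predicted == np.arange(n_subj)))
```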


Abstract of Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity

Functional magnetic resonance imaging (fMRI) studies typically collapse data from many subjects, but brain functional organization varies between individuals. Here we establish that this individual variability is both robust and reliable, using data from the Human Connectome Project to demonstrate that functional connectivity profiles act as a ‘fingerprint’ that can accurately identify subjects from a large group. Identification was successful across scan sessions and even between task and rest conditions, indicating that an individual’s connectivity profile is intrinsic, and can be used to distinguish that individual regardless of how the brain is engaged during imaging. Characteristic connectivity patterns were distributed throughout the brain, but the frontoparietal network emerged as most distinctive. Furthermore, we show that connectivity profiles predict levels of fluid intelligence: the same networks that were most discriminating of individuals were also most predictive of cognitive behavior. Results indicate the potential to draw inferences about single subjects on the basis of functional connectivity fMRI.