• From Artificial Lawyer: Thomson Reuters is again turning to AI tools, now with a contract remediation system to help companies review and repaper legal agreements ahead of Brexit. In this case it will be working with AI company Logical Construct, which combines natural language processing (NLP) and machine learning techniques to extract contract data.


  • From Patent Docs: FDA Permits Marketing of First AI-based Medical Device; Signals Fast Track Approach to Artificial Intelligence.


  • SINGAPORE (Reuters) – In the not too distant future, surveillance cameras sitting atop over 100,000 lampposts in Singapore could help authorities pick out and recognize faces in crowds across the island-state. Some top officials in Singapore played down the privacy concerns. Prime Minister Lee Hsien Loong said last week that the Smart Nation project was aimed at improving people’s lives and that he did not want it done in a way “which is overbearing, which is intrusive, which is unethical”.


  • Google and AI Ethics: “After it emerged last month that Google was working with the Defense Department on a project for analyzing drone footage using “artificial intelligence” techniques, Google’s employees were not happy.” “(M)ore than 3,000 of the employees signed a letter to CEO Sundar Pichai, demanding that the company scrap the deal.” “Google Cloud chief Diane Greene … told employees Google was ‘drafting a set of ethical principles to guide the company’s use of its technology and products.’” “…Greene promised Google wouldn’t sign up for any further work on ‘Maven’ or similar projects without having such principles in place, and she was sorry the Maven contract had been signed without these internal guidelines having been formulated.”


  • House of Representatives Hearing: GAME CHANGERS: ARTIFICIAL INTELLIGENCE PART III, ARTIFICIAL INTELLIGENCE AND PUBLIC POLICY, Subcommittee on Information Technology, APRIL 18, 2018 2:00 PM, 2154 RAYBURN HOB.
  • Thompson Hine has commissioned this report, “Closing the Innovation Gap.” It’s a methodologically sound survey of almost 200 in-house folks regarding their desire for innovation in legal services and what they’re getting from outside counsel. Spoiler alert: they’re not thrilled. The report is nicely illustrated with infographics and includes a bare minimum of self-promotion.


  • “Axiom launches Brexit AI product to help companies update 7.5m contracts.” Details here.


  • From Reed Smith: “European Commission outlines blockchain development plans, calls for a feasibility study and unveils FinTech Action Plan.” Among the observations in the post: “The initiative forms part of the drive towards the digital single market, a Commission strategy to boost e-commerce, modernize regulations and promote the digital economy.”


  • Here’s more on French President Macron’s push to make France a world leader in AI. Of course, in Europe, anything involving data will require finesse. Additional analysis here.


  • In this sponsored piece from Artificial Lawyer, Kira presents three use cases for its AI-based solutions: Brexit, GDPR and IFRS 16.


  • Compliance: The growing threat from money laundering and terrorist financing has driven increasingly stringent anti-money laundering legislation worldwide. AI can be a major part of the solution, as it can search for unstructured data in the Deep Web and across languages far better, faster and more cheaply than humans can.


  • Squire Patton Boggs has advised Appen Limited (developer of high-quality, human-annotated datasets for machine learning and artificial intelligence) in debt financing associated with its acquisition of Leapforce, Inc. and RaterLabs, Inc.


  • There’s a LOT of M&A and IP legal work being generated by the big AI players, and in this article Annie Palmer expects much more to come. As I have said before, the smartest law firms should be considering the use of AI in the practice of law and in their businesses, and possibly establishing an AI industry group to serve the dispute, deal and IP work being generated by this industry.


  • New for the 2018 CES show will be the “Artificial Intelligence Marketplace,” described as “the destination for the latest innovations in AI infrastructure and computer systems able to perform human-intelligence tasks.”


  • Google’s DeepMind has recently proven to be the master of board games “generally.” Just give it the rules and it will beat all computer and human challengers — with no training. While this may be an important step toward the holy grail of General AI, Oren Etzioni, head of the Allen Institute for Artificial Intelligence, who called this an “impressive technical achievement,” echoed the words of Hamlet: “There are more things in heaven and earth than are dreamt of in DeepMind’s philosophy.” To put it another way: the Google subsidiary has made a name for itself by beating humans at board games, but it is important to keep things in perspective.


  • News you can use: Check out the tips at iPhone J.D. My favorite of these is the text replacement feature; I type “millc” on my iPhone or iPad, and “Market Intelligence LLC” appears.


  • It’s Friday, so here’s a thought piece. One of the biggest criticisms of AI is its “black box” nature. That is, we know AI systems can be extraordinarily effective in making good predictions, but the inherent nature of their algorithms (e.g., neural networks) makes it very difficult to understand how and why those predictions are made. I have recently posted a couple of articles about using AI to better understand the decisions made by AI. (Is your head spinning yet?) Anyway, this article goes into some of the reasons it’s important for us to understand the how & why of AI decisions. (Not to mention the need for courts to have these questions answered when assigning liability.)


  • And here’s one of my favorite topics for your weekend cogitation. Can AI be conscious?
  • Junk research? Law.com has published (as a sponsored post) the results of a study by Bird & Bird called “AI: The New Wave of Legal Services.” It’s based on in-depth interviews with 15 GCs. Fifteen. That’s the sample size of one large focus group. And there’s nothing in the report to suggest that these 15 represent any sort of scientifically drawn random sample. Yet they draw conclusions such as, “among those at the forefront of innovation are the major telecom companies.” (Three telecoms’ GCs were interviewed.) Also, “GCs in the US are more enthused by the potential of AI than in other regions.” (n<8.)

The sampling error of a survey of 15 is so large as to be meaningless. And then there’s the selection bias in what is almost certainly not a random sample. These findings may be fun to read, and may actually serve as food for thought as a qualitative/exploratory focus group, but as for faith in the conclusions?


  • McDermott Will & Emery showing they “get it” re AI in Health Care.


  • Brexit Contract Review Solution is a new collaboration between NextLaw Labs (Dentons’ legal tech investment vehicle) and RAVN Systems. The solution leverages RAVN’s AI technology and a bespoke algorithm co-developed with Dentons’ subject matter experts to enable high-volume contract review to pinpoint provisions that the UK secession may impact.


  • If only! This article explains how AI bots can make meetings easier to set up and prep for, and in the future summarize the results, become proactive participants, and eventually, integrate intelligence across meetings. Of course, these final steps are a few years away.


  • Facebook is using AI algorithms (e.g., photo and video-matching technology) and 4,500 employees (with plans to expand the team to 7,500) to weed out terrorism-related content, according to the company’s head of global counter-terrorism policy.


  • “Why” is a mystery to me, but stories about the threats of AI (or lack thereof) abounded yesterday. Here’s a sampling:

Safety of AI/IoT devices: “AI systems should be safe and secure throughout their operational lifetime and verifiably so where applicable and feasible.”

Here’s a podcast interview about “the good, the bad and the ugly implications of AI and machine learning with technologist Albert Stepanyan.”

This podcast with Omar Gallaga is about AI ethics and controlling AI.

This one’s about AI as a threat to personal autonomy.

This editorial by former US Congressman Mike Rogers outlines the “arms race” for AI power among nations.

And I couldn’t talk about AI threats without a nod to Musk’s concerns (e.g., governments “will obtain AI developed by companies at gunpoint, if necessary”), and another rebuttal by the head of AI at Google, who is “…definitely not worried about the AI apocalypse.” Meanwhile, Mark Cuban tweeted that “Autonomous weaponry is the ultimate threat to humanity,” and “Competition for AI superiority at national level most likely cause of WW3 imo.”

Here’s a rebuttal to the forecast that AI will wreck the economy and take jobs. And another, this one by Domino’s Pizza’s CEO.