• Junk research? Law.com has published (as a sponsored post) the results of a study by Bird & Bird called “AI: The New Wave of Legal Services.” It’s based on in-depth interviews with 15 GCs. Fifteen. That’s the sample size of one large focus group. And there’s nothing in the report to suggest that these 15 represent any sort of scientifically drawn random sample. Yet the authors draw conclusions such as, “among those at the forefront of innovation are the major telecom companies.” (Three telecoms’ GCs were interviewed.) Also, “GCs in the US are more enthused by the potential of AI than in other regions.” (n<8.)

The sampling error of a 15-person survey is so large as to render the results meaningless. And then there’s the selection bias in what is almost certainly not a random sample. These findings may be fun to read, and may even serve as food for thought the way a qualitative/exploratory focus group does, but as for faith in the conclusions?

 

  • McDermott Will & Emery showing they “get it” re AI in Health Care.

 

  • Brexit Contract Review Solution is a new collaboration between NextLaw Labs (Dentons’ legal tech investment vehicle) and RAVN Systems. “The solution leverages RAVN’s AI technology and a bespoke algorithm co-developed with Dentons’ subject matter experts to enable high-volume contract review to pinpoint provisions that the UK secession may impact.”

 

  • If only! This article explains how AI bots can make meetings easier to set up and prep for, and in the future summarize the results, become proactive participants, and eventually, integrate intelligence across meetings. Of course, these final steps are a few years away.

 

  • Facebook is using AI algorithms (e.g., photo and video-matching technology) and 4,500 employees (with plans to expand the team to 7,500) to weed out terrorism-related content, according to the company’s head of global counter-terrorism policy.

 

  • “Why” is a mystery to me, but stories about the threats of AI (or lack thereof) abounded yesterday. Here’s a sampling:

Safety of AI/IoT devices: “AI systems should be safe and secure throughout their operational lifetime and verifiably so where applicable and feasible.”

Here’s a podcast interview about “the good, the bad and the ugly implications of AI and machine learning with technologist Albert Stepanyan.”

This podcast with Omar Gallaga is about AI ethics and controlling AI.

This one’s about AI as a threat to personal autonomy.

This editorial by former US Congressman Mike Rogers outlines the “arms race” for AI power among nations.

And I couldn’t talk about AI threats without a nod to Musk’s concerns (e.g., governments “will obtain AI developed by companies at gunpoint, if necessary”), and another rebuttal by the head of AI at Google, who is “definitely not worried about the AI apocalypse.” Meanwhile, Mark Cuban tweeted that “Autonomous weaponry is the ultimate threat to humanity” and “Competition for AI superiority at national level most likely cause of WW3 imo.”

Here’s a rebuttal to the forecast that AI will wreck the economy and take jobs. And another, this one by the CEO of Domino’s Pizza.