• I love this “back of the envelope” answer to “Is it AI?” from Karen Hao and the MIT Technology Review.

 

 

  • This half-hour podcast from McDermott’s Week in Health Law series addresses AI & the Practice of Medicine. Five guests participate (Terry Dee, Jiayan Chen, Kate McDonald, Dale Van Demark, Eric Fish). Topics include where AI in medicine is headed, the need for regulation, privacy, liability, insurance and more. Sorry about the audio quality.

 

  • From Buckley Sandler, here’s a brief summary of Federal Reserve Governor Lael Brainard’s remarks at the 11/13 “Fintech and New Financial Landscape” conference. “Brainard’s prepared remarks emphasize the benefits and potential risks to bank safety and consumer protection that new AI applications pose.”

 

  • AI probe hears calls for ethics code. “Calls for a code of ethics, concerns about ‘Minority Report’-style crime prediction systems and a proposal for a new legal framework governing data ownership were among the evidence presented to the second public meeting of a landmark probe into the use of algorithms in the justice system. Appearing before the Technology and Law Policy Commission last week, The Hon. Mr. Justice Knowles called for the development of an ethical and legal framework for artificial intelligence. ‘AI is going to go deeper into people’s lives than many things have before,’ he told the commission chaired by Law Society president Christina Blacklaws. ‘It is imperative that we take the opportunity for law and ethics to travel with it.’” Here’s the brief summary from The Law Society Gazette.

 

  • Also from The Law Society Gazette: Embrace technology before your business model is threatened, Welsh firms told. This article summarizes “a seminar held by the National Assembly for Wales on the challenges presented by artificial intelligence and automation to legal services….”

 

  • Greenberg Traurig’s Paul Ferrillo posted this along with SDI Cyber’s George Platsis: Quantum Computing to Protect Data: Will You Wait and See or Be an Early Adopter? “So while we are still very much in the ‘zone of the unknown’ a word of advice: if you’re a data-heavy organization and you plan to use and keep that data for years to come, you need to start thinking about new and alternate forms of encryption today.”

 

  • Here’s another thought piece from Mark A. Cohen: What Are Law Schools Training Students For? “Law is entering the age of the consumer and bidding adieu to the guild that enshrined lawyers and the myth of legal exceptionalism. That’s good news for prospective and existing legal consumers.” He addresses the challenges this will present for law schools. “These changes are affecting what it means to ‘think like a lawyer’ and, more importantly, what ‘legal’ skills are required in today’s marketplace.”

 

 

  • From MobiHealthNews: Roundup: 12 healthcare algorithms cleared by the FDA. “As AI cements its role in healthcare, more and more intelligent software offerings are pursuing 510(k) and De Novo approvals.” Each of the 12 is summarized here.

 

  • Here (Geek comes of age), Joanna Goodman provides a good summary of this year’s Legal Geek event and its look into the future.

 

  • This story appeared in the New Hampshire Union Leader: Is Alexa’s speech protected? “The proper police procedures for searching a home or briefcase have been hammered out through decades of case law. But the question of how easy it should be for police to access the vast troves of data collected by the so-called internet of things — devices like Amazon’s Echo, Google’s Home, smart toasters, and other household objects equipped with sensors and connected to networks — is far from settled law.” Several credible sources are cited for their opinions.

 

  • Ashley Deeks of the University of Virginia Law School penned: Artificial Intelligence and the Resort to Force. “How will AI change the way states make decisions about when to resort to force under international law? Will the use of AI improve or worsen those decisions? What should states take into account when determining how to use AI to conduct their jus ad bellum analyses?” This post is an overview of a larger article by Deeks and two colleagues.

 

  • Eamonn Moran of Kilpatrick Townsend & Stockton wrote: A Regulator’s Assessment of the Impact of Artificial Intelligence on Financial Services. The Federal Reserve Board’s “Fintech working group is working across the Federal Reserve System ‘to take a deliberate approach to understanding the potential implications of AI for financial services, particularly as they relate to our responsibilities.'” Here’s the full post.

 

  • Former Lehman lawyer predicts big role for AI post-Brexit. “A former senior counsel for Lehman Brothers investment bank and artificial intelligence (AI) technology pioneer has predicted AI will be crucial in helping companies alter thousands of contracts rapidly in the aftermath of Brexit. Beth Anisman, former global chief administrative officer in Lehman’s legal compliance and audit department, who subsequently co-founded an AI company, Apogee Legal – recently sold to e-discovery giant Seal Software – said disentangling contracts following Lehman’s collapse 10 years ago would have benefited from AI.” More here.

 

Blockchain

  • From BlockTribune: Blockchain, AI and the Legal System – Will Tech Lead The Law? “What happens when technologies, such as deep learning software and self-enforcement code, lead legal decisions? How can one ensure that next-generation legal technology systems are not unfairly biased against certain groups or individuals? And what skills will lawyers need to develop in order to properly assess the quality of justice that flows from data-driven decisions?” These are the questions Mireille Hildebrandt, professor at the law, science, technology and society research group at Vrije Universiteit Brussels in Belgium, will formulate and respond to during a five-year project to investigate the implications of what she calls “computational law.” More here.

 

  • K&L Gates has posted Volume 38 of its Blockchain Energizer Energy Alert, this time summarizing three recent developments.

 

  • West Virginia Secretary of State Reports Successful Blockchain Voting in 2018 Midterm Elections. “…(I)n the 2018 midterm elections, 144 military personnel stationed overseas from 24 counties were able to cast their ballots on a mobile, blockchain-based platform called Voatz….” More here.

The biggest story in AI this week is the launch in China of an AI (“Digital Human”) news reader/anchor person. It’s certainly not Uncle Walter, but at first glance it’s pretty convincing. “The Chinese AI anchor man looks very much like the average Chinese citizen, a typical Chinese guy with that oddly intellectual look. He looks reassuring, made for his market like most news readers’ images are supposed to be.” Coverage here, here, here and video here. “There’s fake news, and then there’s fake people doing the news.”

In related news, Microsoft has developed AI that goes beyond the now well-established systems that write news articles. “Condensing paragraphs into sentences isn’t easy for artificial intelligence (AI). That’s because it requires a semantic understanding of the text that’s beyond the capabilities of most off-the-shelf natural language processing models. But it’s not impossible, as researchers at Microsoft recently demonstrated.”

 

  • Read this post from Artificial Lawyer: Legal Is Not ‘Special’ – Key Message of TR Legal Tech Procurement Event. It provides some excellent insights from the heads of legal departments at some major corporations as to where the industry is headed and why.

 

  • Artificial Lawyer (AL) has begun to do product reviews. The first company to be reviewed is Kira Systems, and here is the link. It’s not actually a link to a review, but rather a call for users to review the product according to specified criteria which will then be reported. Cool.

 

More posts from Artificial Lawyer:

– BCLP Launches ML Early Dispute Evaluation Service. “Clear/Cut harnesses the firm’s award-winning in-house forensic technology capability.” More here.

– Big Data Startup Concirrus Wins Norton Rose InsurTech Prize. Details here.

– Using AI Contract Analysis to Prepare for Brexit – Seal Software. More of this sponsored post here.

 

  • Blank Rome published Will “Leaky” Machine Learning Usher in a New Wave of Lawsuits? in RAIL: The Journal of Robotics, Artificial Intelligence & Law. “…(I)t seems all but inevitable that some of those (AI) systems will create unintended and unforeseen consequences, including harm to individuals and society at large.”

 

  • Law.com posted this news from Bryan Cave: New Data Analysis Service Could Help In-House Clients See the Future. “…Clear/Cut leverages predictive coding and machine learning to comb through massive amounts of data and pluck out key information for legal analysts, who use the data to recommend whether clients should settle or forge ahead with litigation.” More here.

 

 

  • From Laura H. Phillips of Drinker: The FCC Wades into the Artificial Intelligence (AI), Machine Learning Pool. “Federal Communications Commission Chairman Ajit Pai issued a Public Notice announcing a first ever FCC Forum focusing on artificial intelligence (AI) and machine learning. This Forum will convene at FCC headquarters on November 30.”

 

  • This, from Jonathan Bockman, Rudy Y. Kim, and Anna Yuan of MoFo: Patenting Artificial Intelligence in the U.S. – Considerations for AI Companies. “…(C)ertain AI technologies can face increased scrutiny at the U.S. Patent and Trademark Office (USPTO) with respect to whether the invention is directed to patent-eligible subject matter.”

 

  • James M. Beck of Reed Smith published The Diagnostic Artificial Intelligence Speedbump Nobody’s Mentioning. This is a very interesting and thorough treatment of the FDA’s regulations and the need for more.

 

  • Canada’s Torys published: Software As Medical Devices And Digital Health In Canada: What’s Next? Link here.

 

  • From Pillsbury’s Ashley E. Cowgill: Artificial Intelligence: A Grayish Area for Insurance Coverage. Download here from The Journal of Robotics, Artificial Intelligence & Law Vol. 2, No. 1.

 

  • Here’s an interesting post by Ian Connett of QuantumJurist: A Future of J.D. Advantage Jobs? (“J.D. Advantage” jobs are those for which a law degree is strongly preferred, but not necessarily required.) As you might expect, the answer is “yes,” and the specific examples he presents are interesting.

 

  • “Amazon Web Services (AWS), Amazon’s on-demand cloud computing subsidiary, was partially HIPAA eligible — AWS customers could use Polly, SageMaker, Rekognition, and dozens of the platform’s other offerings to process health information. But Translate, Comprehend, and Transcribe remained notable holdouts — until now, that is. As of this week, all three comply with HIPAA.” Story from Venture Beat here.

 

  • Dentons has published this Market Insights volume titled: Digital Transformation and the Digital Consumer. There’s a chapter on AI and much of the content is AI-related. There’s a video excerpt here.

 

  • LeClairRyan has published Airplanes and Artificial Intelligence Parts I and II. “…(A)pplications for AI in aviation and its effect on the legal liability and regulation of those who use it.”

 

  • From Hogan Lovells, here’s a link to download Artificial Intelligence and your business: A guide for navigating the legal, policy, commercial, and strategic challenges ahead.

 

  • Milena Higgins of Black Hills is the guest on this episode of Legal Talk Network’s “Legal Toolkit”: Robot Takeover: How Automation Makes Law Practice Easier.

 

  • Here’s the latest in Mintz’s series: Strategies To Unlock AI’s Potential In Health Care, Part 4: How And When Will Congress Act?

 

  • At two events in the past 30 days I’ve been part of discussions about law firms acquiring tech companies. Here’s an example: Singapore law firm Rajah & Tann acquires e-discovery startup LegalComet.

 

  • “Nalytics is working with Strathclyde University’s Law School post-graduate students on a new project dedicated to promoting digital transformation in legal education. By providing free access to the Nalytics search and discovery platform to students on the Diploma in Professional Legal Studies, the project aims to help students develop a greater understanding of legal technology and more importantly, its applications in tackling a range of big data problems.” Story here.

 

  • This article from S&P Global Platts (Commodity market AI applications are emerging along with new risks) cites partners at several prominent law firms, among others. “Artificial intelligence and smart contract technology like blockchain are slowly being adopted by commodity markets, creating opportunities to streamline trading and other functions, but not without introducing challenges and risks, experts said Thursday.”

 

  • Exterro has issued the results of another survey. (2018 In-house Legal Benchmarking Report. There’s a link here.) All that is presented regarding the methodology is “…with over 100 respondents (more than ever before), this year’s report surveys a wider distribution of companies, including more from organizations of fewer than 25,000 people than in the past.” So, I’m assuming there are 101 respondents, making the typical margin of error about +/-10% (see the quick calculation below). Given the wide range of company sizes (1 to 250,000+ employees) and the fact that most fall into one size category (1,000-25,000 employees), I don’t see how there can be much useful information anywhere in the report. Law.com talks about it (without regard to the methodology) here.
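
For anyone who wants to check that back-of-the-envelope arithmetic (the same formula covers the 128-respondent and 15-respondent studies discussed below), here is a minimal sketch in Python. It assumes a simple random sample with the worst-case 50/50 split, ignores any finite-population correction, and the function name moe_95 is purely illustrative:

    import math

    def moe_95(n, p=0.5):
        # Approximate 95% margin of error for a simple random sample of size n,
        # using the worst-case proportion p = 0.5 and no finite-population correction.
        return 1.96 * math.sqrt(p * (1 - p) / n)

    for n in (101, 128, 15):
        print(f"n = {n:>3}: about +/-{moe_95(n) * 100:.0f}%")
    # n = 101: about +/-10%
    # n = 128: about +/-9%
    # n =  15: about +/-25%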

 

  • Here’s another industry survey. (The Blickstein Group’s 10th Annual Law Department Operations Survey.) This one has 128 respondents this year, but reports data back to 2008, when they had only 34 respondents. This year’s stats are probably accurate to within +/-9%, which means that many of the differences reported are actually statistical ties, and the prior-year data with very small samples should be ignored. Above the Law includes a summary by Brad Blickstein here without comment on its methodology. When combined with the included content by vendors and law firms, I see this study as the equivalent of an interesting focus group — just don’t take the statistics seriously.

 

  • I find it interesting that this post from Kyocera BRANDVOICE in Forbes (Can The Right Office Equipment Improve Our Legal Culture?) has a section on AI. They include AI as “equipment-related”.

 

  • Here, from the New York Times DealBook, is a thorough examination of the bias present in today’s artificial intelligence: The Commonality of A.I. and Diversity, written by Alina Tugend.

 

Blockchain

  • This, from ContractWorks: Are Your Contracts in Chaos? Get Organized with These 4 Tips.

 

 

Also from Artificial Lawyer:

Smart Contract Pioneer OpenLaw Goes Open Source. Story here.

  • From Artificial Lawyer: Thomson Reuters is again turning to AI tools, now with a contract remediation system to help companies review and repaper legal agreements ahead of Brexit. In this case it will be using AI company Logical Construct, which leverages a combination of natural language processing (NLP) and machine learning techniques to achieve its extraction results.

 

  • From Patent Docs: FDA Permits Marketing of First AI-based Medical Device; Signals Fast Track Approach to Artificial Intelligence.

 

  • SINGAPORE (Reuters) – In the not too distant future, surveillance cameras sitting atop over 100,000 lampposts in Singapore could help authorities pick out and recognize faces in crowds across the island-state. Some top officials in Singapore played down the privacy concerns. Prime Minister Lee Hsien Loong said last week that the Smart Nation project was aimed at improving people’s lives and that he did not want it done in a way “which is overbearing, which is intrusive, which is unethical”.

 

  • Google and AI Ethics: “After it emerged last month that Google was working with the Defense Department on a project for analyzing drone footage using ‘artificial intelligence’ techniques, Google’s employees were not happy.” “(M)ore than 3,000 of the employees signed a letter to CEO Sundar Pichai, demanding that the company scrap the deal.” “Google Cloud chief Diane Greene … told employees Google was ‘drafting a set of ethical principles to guide the company’s use of its technology and products.’” “…Greene promised Google wouldn’t sign up for any further work on ‘Maven’ or similar projects without having such principles in place, and she was sorry the Maven contract had been signed without these internal guidelines having been formulated.”

 

  • House of Representatives hearing: Game Changers: Artificial Intelligence Part III, Artificial Intelligence and Public Policy, Subcommittee on Information Technology, April 18, 2018, 2:00 PM, 2154 Rayburn HOB.
  • Thompson Hine has commissioned this report, “Closing the Innovation Gap.” It’s a methodologically sound survey of almost 200 in-house folks regarding their desire for innovation in legal services and what they’re getting from outside counsel. Spoiler alert: they’re not thrilled. The report is nicely illustrated with infographics and includes a bare minimum of self-promotion.

 

  • “Axiom launches Brexit AI product to help companies update 7.5m contracts.” Details here.

 

  • From Reed Smith: “European Commission outlines blockchain development plans, calls for a feasibility study and unveils FinTech Action Plan.” Among the observations in the post: “The initiative forms part of the drive towards the digital single market, a Commission strategy to boost e-commerce, modernize regulations and promote the digital economy.”

 

  • Here’s more on French President Macron’s push to make France a world leader in AI. Of course, in Europe, anything involving data will require finesse. Additional analysis here.

 

  • In this sponsored piece from Artificial Lawyer, Kira presents three use cases for its AI-based solutions: Brexit, GDPR and IFRS 16.

 

  • Compliance: the growing threat from money laundering and terrorist financing has required anti-money laundering legislation to become more stringent worldwide. AI can be a major part of the solution, as it can search unstructured data in the Deep Web and across languages better, faster and more cheaply than humans.

 

  • Squire Patton Boggs has advised Appen Limited (developer of high-quality, human-annotated datasets for machine learning and artificial intelligence) in debt financing associated with its acquisition of Leapforce, Inc. and RaterLabs, Inc.

 

  • There’s a LOT of M&A and IP legal work being generated by the big AI players, and in this article Annie Palmer expects much more to come. As I have said before, the smartest law firms should be considering the use of AI in the practice of law and in their businesses, and possibly establishing an AI industry group to serve the dispute, deal and IP work being generated by this industry.

 

  • New for the 2018 CES show will be the “Artificial Intelligence Marketplace,” billed as “the destination for the latest innovations in AI infrastructure and computer systems able to perform human-intelligence tasks.”

 

  • Google’s DeepMind has recently proven to be the master of board games “generally.” Just give it the rules and it will beat all computer and human challengers, learning entirely through self-play with no human training data. While this may be an important step toward the holy grail of General AI, Oren Etzioni, head of the Allen Institute for Artificial Intelligence, who called this an “impressive technical achievement,” echoed the words of Hamlet: “There are more things in heaven and earth than are dreamt of in DeepMind’s philosophy.” To put it another way: the Google subsidiary has made a name for itself by beating humans at board games, but it is important to keep things in perspective.

 

  • News you can use: Check out the tips at iPhone J.D. My favorite of these is the text replacement feature; I type “millc” on my iPhone or iPad, and “Market Intelligence LLC” appears.

 

  • It’s Friday, so here’s a thought piece. One of the biggest criticisms of AI is its “black box” nature. That is, we know AI systems can be extraordinarily effective in making good predictions, but the inherent nature of their algorithms (e.g., neural networks) makes it very difficult to understand how and why those predictions are made. I have recently posted a couple of articles about using AI to better understand the decisions made by AI. (Is your head spinning yet?) Anyway, this article goes into some of the reasons it’s important for us to understand the how & why of AI decisions. (Not to mention the need for courts to have these questions answered when assigning liability.)

 

  • And here’s one of my favorite topics for your weekend cogitation. Can AI be conscious?
  • Junk research? Law.com has published (as a sponsored post) the results of a study by Bird & Bird called “AI: The New Wave of Legal Services.” It’s based on in-depth interviews with 15 GCs. Fifteen. That’s the sample size of one large focus group. And there’s nothing in the report to suggest that these 15 represent any sort of scientifically drawn random sample. Yet they draw conclusions such as, “among those at the forefront of innovation are the major telecom companies.” (Three telecoms’ GCs were interviewed.) Also, “GCs in the US are more enthused by the potential of AI than in other regions.” (n<8.)

The sampling error of a survey of 15 is so large as to be meaningless. And then there’s the selection bias in what is almost certainly not a random sample. These findings may be fun to read, and may actually serve as food for thought as a qualitative/exploratory focus group, but as for faith in the conclusions?

 

  • McDermott Will & Emery showing they “get it” re AI in Health Care.

 

  • Brexit Contract Review Solution is a new collaboration between NextLaw Labs (Dentons’ legal tech investment vehicle) and RAVN Systems. “The solution leverages RAVN’s AI technology and a bespoke algorithm co-developed with Dentons’ subject matter experts to enable high-volume contract review to pinpoint provisions that the UK secession may impact.”

 

  • If only! This article explains how AI bots can make meetings easier to set up and prep for, and in the future summarize the results, become proactive participants, and eventually, integrate intelligence across meetings. Of course, these final steps are a few years away.

 

  • Facebook is using AI algorithms (e.g., photo and video-matching technology) and 4500 employees (with plans to expand the team to 7500) to weed out terrorism-related content, according to the company’s head of global counter-terrorism policy.

 

  • “Why” is a mystery to me, but stories about the threats of AI (or lack thereof) abounded yesterday. Here’s a sampling:

Safety of AI/IoT devices: “AI systems should be safe and secure throughout their operational lifetime and verifiably so where applicable and feasible.”

Here’s a podcast interview about “the good, the bad and the ugly implications of AI and machine learning with technologist Albert Stepanyan.”

This podcast with Omar Gallaga is about AI ethics and controlling AI.

This one‘s about AI as a threat to personal autonomy.

This editorial by former US Congressman Mike Rogers outlines the “arms race” for AI power among nations.

And I couldn’t talk about AI threats without a nod to Musk’s concerns (e.g., governments “will obtain AI developed by companies at gunpoint, if necessary”), and another rebuttal by the head of AI at Google, who is “…definitely not worried about the AI apocalypse.” Meanwhile, Mark Cuban tweeted that “Autonomous weaponry is the ultimate threat to humanity,” and “Competition for AI superiority at national level most likely cause of WW3 imo.”

Here’s a rebuttal to the forecast that AI will wreck the economy and take jobs. And another, this one by Domino’s Pizza’s CEO.