• Facial recognition AI has been in the news and on my mind a lot lately. There are legal implications, of course, but even setting those aside, these developments are a big deal you should be aware of.

– Traveling this 4th of July? Orlando’s airport has rolled out facial recognition for all departing passengers in an attempt to speed up lines (e.g., no need to show your passport at the gate). It takes two seconds and is 99%+ accurate. (Passengers can opt out.) This story from CBS News discusses the privacy implications.

– Could this get a bit out of control? Here’s a case study: “(a)cross China, a network of 176 million surveillance cameras, expected to grow to 626 million by 2020, keeps watch on the country’s over 1.3 billion citizens.” (That’s a camera for every two people.) And, the intent is total surveillance, including inside people’s homes. “According to the official Legal Daily newspaper, the 13th Five Year Plan requires 100 percent surveillance and facial recognition coverage and total unification of its existing databases across the country. By 2020, China will have completed its nationwide facial recognition and surveillance network, achieving near-total surveillance of urban residents, including in their homes via smart TVs and smartphones.” “Soon, police and other officials will be able to monitor people’s activities in their own homes, wherever there is an internet-connected camera.”

Are they effective? Last year, “(i)t took Chinese authorities just seven minutes to locate and apprehend BBC reporter John Sudworth using its powerful network of CCTV cameras and facial recognition technology.” That story here. And the case of the stolen potato here.

– “We live in a surveillance society: A U.S. citizen is reportedly captured on CCTV around 75 times per day. And that figure is even higher elsewhere in the world. Your average Brit is likely to be caught on surveillance cameras up to 300 times in the same period.” This post describes how those images can be used to spot (and even predict) crime.

This post (This Japanese AI security camera shows the future of surveillance will be automated) describes AI technology being developed in Japan to spot shoplifters and discusses the concerns such technologies raise.

Facebook and others (such as Adobe) are using such recognition technologies to disrupt terrorist networks and mitigate the spread of fake news. “(T)he biggest companies extensively rely on artificial intelligence (AI). Facebook’s uses of AI include image matching. This prevents users from uploading a photo or video that matches another photo or video that has previously been identified as terrorist. Similarly, YouTube reported that 98% of the videos that it removes for violent extremism are also flagged by machine learning algorithms.”
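Facebook’s actual image-matching system is proprietary, but the general idea behind detecting re-uploads of previously flagged photos is often illustrated with perceptual hashing: hash an image so that visually similar copies (recompressed, slightly brightened) produce nearly identical hashes. Here is a minimal, purely illustrative sketch using a toy “average hash” on small grayscale pixel grids; all names and data here are my own, not Facebook’s.

```python
# Illustrative sketch only: this is NOT Facebook's system, just the common
# "average hash" idea behind perceptual image matching.

def average_hash(pixels):
    """Compute a simple perceptual hash of a grayscale image.

    `pixels` is a 2D list of brightness values (0-255). Each bit records
    whether a pixel is brighter than the image's average, so small edits
    (compression artifacts, slight brightness shifts) change few bits.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200, 30, 30],
            [200, 200, 30, 30],
            [30, 30, 200, 200],
            [30, 30, 200, 200]]

# A re-upload with slight brightness changes still hashes identically.
reupload = [[190, 210, 25, 35],
            [205, 195, 35, 28],
            [28, 33, 195, 210],
            [35, 25, 205, 190]]

dist = hamming_distance(average_hash(original), average_hash(reupload))
print(dist)  # 0: the edited copy matches the known image
```

In a real deployment the hashes of known terrorist images sit in a database, and each new upload’s hash is compared against it; an exact or near match blocks the upload.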

Amazon employees (like Google’s before them) are protesting their company’s selling of such technologies to the government. Amazon workers don’t want their tech used by ICE.

Many (including me) consider this a much more benevolent identity technology: Thousands of Swedes are inserting microchips into themselves – here’s why.



  • “Mishcon de Reya has joined the ranks of law firms with high-level in-house data science capability, hiring UCL computer scientist Alastair Moore as head of analytics and machine learning.”


  • From O’Melveny: FTC Seeking Input on Topics to be Explored at Public Hearings on Competition and Consumer Protection in the 21st Century. Topics include: “(t)he consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics.”


  • Here, from Digital Journal, is a discussion of the general ways law firms are using AI: Q&A: How technology is shaking up legal firms.


  • From Artificial Lawyer, Wolters Kluwer Joins Global Legal Blockchain Consortium. “The GLBC is a global network of key stakeholders in the legal industry, working toward rules for the standardisation, governance, and application of blockchain and related technologies in the global legal system. Its mission is to ‘enhance the security, privacy, productivity, and interoperability of the legal technology ecosystem’.”

– More from Artificial Lawyer about Blockchain here: EY + Microsoft Enter the Blockchain IP + Royalties Sector. “Big Four firm EY and Microsoft have launched a blockchain solution for content rights and royalties management, joining a growing group of legal tech start-ups – which are operating at a much smaller scale – that have also developed similar blockchain-based IP solutions.”


  • Also from Artificial Lawyer: Global AI Governance Group: ‘AI Decisions Must Track Back to Someone’. “A newly launched AI Global Governance commission (AIGG), tasked with forming links with politicians and governments around the world to help develop and harmonise rules on the use of AI, has suggested that at least one key regulation should be that any decisions made by an AI system ‘must be tracked back to a person or an organisation’.”

This Artificial Lawyer interview with Kira’s Noah Waisberg is more than just an overview of Kira’s rapid growth; it has good insights into doc review generally.


  • Here’s a somewhat entertaining look at how law firms are engaging AI vendors: Buying AI for Law Firms: Like a Trip to the Auto Show.


  • From Lowndes, Drosdick, Doster, Kantor & Reed, P.A. via JDSupra: Should Law Firms Embrace Artificial Intelligence and R&D Labs? “Change is difficult, especially in the legal market. Yet a firm’s willingness to think differently reflects its ability to adapt, to ensure sustainability for itself, and to help solve that industrywide puzzle.”


  • This article from the NYT (Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So) may sound negative in suggesting that machine learning is over-hyped, but it presents other types of AI in a positive light. It’s a good read.


  • Also somewhat negative is this post from MIT about the AI threat: “AI programs capable of perceiving the real world, interacting with it, and learning about it might eventually become far better at reasoning and even communicating. ‘If you solve manipulation in its fullest,’ Abbeel says, ‘you’ll probably have built something that’s pretty close to full, human-level intelligence’.”
  • From Artificial Lawyer: Thomson Reuters is again turning to AI tools, now with a contract remediation system to help companies review and repaper legal agreements ahead of Brexit. In this case it will be using AI company Logical Construct, which leverages a combination of natural language processing (NLP) and machine learning techniques to achieve its extraction results.
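Logical Construct’s actual NLP and machine-learning pipeline is not public, so the following is only a hypothetical sketch of the simplest version of the extraction step: scanning contract clauses for terms a Brexit repapering review would need to flag (the term list and clause-splitting rule here are my own assumptions, not the vendor’s).

```python
# Hypothetical sketch -- NOT Logical Construct's method. It shows the most
# basic form of clause extraction: flag clauses containing review terms.
import re

BREXIT_TERMS = re.compile(
    r"\b(european union|eu law|eea|passporting|governing law)\b",
    re.IGNORECASE,
)

def flag_clauses(contract_text):
    """Split a contract into clauses (blank-line delimited) and return
    those mentioning any term on the review list."""
    clauses = [c.strip() for c in contract_text.split("\n\n") if c.strip()]
    return [c for c in clauses if BREXIT_TERMS.search(c)]

sample = """1. Payment is due within 30 days.

2. This agreement is subject to EU law and the laws of England.

3. Notices must be in writing."""

for clause in flag_clauses(sample):
    print(clause)  # prints clause 2 only
```

Real systems go far beyond keyword matching (trained classifiers, entity extraction across varied layouts), but the input/output shape is the same: contracts in, flagged clauses out for human review.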


  • From Patent Docs: FDA Permits Marketing of First AI-based Medical Device; Signals Fast Track Approach to Artificial Intelligence.


  • SINGAPORE (Reuters) – In the not too distant future, surveillance cameras sitting atop over 100,000 lampposts in Singapore could help authorities pick out and recognize faces in crowds across the island-state. Some top officials in Singapore played down the privacy concerns. Prime Minister Lee Hsien Loong said last week that the Smart Nation project was aimed at improving people’s lives and that he did not want it done in a way “which is overbearing, which is intrusive, which is unethical”.


  • Google and AI Ethics: “After it emerged last month that Google was working with the Defense Department on a project for analyzing drone footage using “artificial intelligence” techniques, Google’s employees were not happy.” “(M)ore than 3,000 of the employees signed a letter to CEO Sundar Pichai, demanding that the company scrap the deal.” “Google Cloud chief Diane Greene … told employees Google was ‘drafting a set of ethical principles to guide the company’s use of its technology and products.’” “…Greene promised Google wouldn’t sign up for any further work on ‘Maven’ or similar projects without having such principles in place, and she was sorry the Maven contract had been signed without these internal guidelines having been formulated.”


  • House of Representatives Hearing: Game Changers: Artificial Intelligence Part III, Artificial Intelligence and Public Policy, Subcommittee on Information Technology, April 18, 2018, 2:00 PM, 2154 Rayburn HOB.
  • “Is the NZ lawyer of the future a robot?” may be a clickbait title, but the article is an interesting discussion of who will make a good lawyer in the future and how they should be trained.


  • The EU’s General Data Protection Regulation (GDPR) goes into effect in 197 days. It will have major impact on AI’s training data (among many other things). Orrick has a cool “EU GDPR Readiness Assessment Tool” to help companies prepare. (HT to @HelenaLawrence for the link.)


  • Meanwhile, as the EU is taking extraordinary steps to protect individuals’ privacy, China has deployed 20 million cameras in public places and “… in the name of public safety, the Chinese government will have cameras everywhere in every single corner that can track movements, objects and people so it can build huge database analytics to train artificial intelligence….”


  • WHAT?? To protect your privacy, Facebook suggests that you upload nude pictures of yourself. No, really, our world is getting that strange. Details here.


  • One last note about AI protecting you: On the Move Systems (OMVS) subsidiary Robotic Assistance Devices (RAD) announces its “first security guard robot deployment.”


  • This excellent piece by McKinsey echoes a couple of my favorite admonitions (e.g., don’t start by trying to fit in a way to use AI; instead, start by inventorying your strategic challenges and decide whether AI might help address one or two). It’s a good primer on how and why to use AI in a commercial enterprise.


  • Bryan Cave continues to show it ‘gets it’ regarding all this future tech stuff: “Bryan Cave Chief Innovation Officer Katie DeBord will join a panel presentation at the Forum on Legal Evolution, an invitation-only group comprised of legal innovators and early adopters, organized around a shared interest in the changing legal market.”


  • Joe Lynyak, Partner, Dorsey & Whitney, will be moderating a panel titled “Artificial Intelligence and Bank Enforcement: A Sword or a Shield?” at the Fourth Annual Federal Enforcement Forum on December 6.


  • During my presentation in DC yesterday, some of the more interesting discussion centered on AI measuring and tracking emotion, also known as “sentiment analysis,” an important aspect of any brand, including a law firm’s. Here’s a good overview of the field.
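Production sentiment-analysis systems use trained models, but the core idea can be shown with a toy lexicon-based scorer: count words from positive and negative lists and label the text by the net score. The word lists and scoring rule below are illustrative assumptions of mine, not any vendor’s product.

```python
# Toy lexicon-based sentiment scorer -- purely illustrative of the concept.
POSITIVE = {"excellent", "helpful", "responsive", "trust", "recommend"}
NEGATIVE = {"slow", "expensive", "unresponsive", "disappointed", "confusing"}

def sentiment_score(text):
    """Return (score, label): +1 per positive word, -1 per negative word."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return score, label

print(sentiment_score("The firm was responsive and helpful, I recommend them."))
# (3, 'positive')
```

For a brand (or a law firm), the same scoring is run over client reviews, survey comments, or social mentions and tracked over time, which is what makes it a measurement tool rather than a one-off classifier.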


  • Legal Marketers: This is an interesting study of consumer loyalty, and it is relevant to law firm clients as well. Consider, for instance, these four loyalty determinants (they sound about right to me):

    • Does the experience adapt to my individual needs? Is it predictive?
    • Is the service available when and where I want? Is it prevalent?
    • Does the vendor help me to filter choice?
    • Does the experience delight me? Is it differentiated?


  • Finally, a few bits of news regarding the future and the whole ‘end-of-the-world’ kerfuffle.
    • AI robot and Saudi citizen Sophia says she really didn’t mean it when she threatened to “kill all humans.”
    • And, not to worry, this prognostication says all will be fine in about 30 years, as we’ll be loving life and working four-hour days.
    • In fact, thanks to AI, in only 10-20 years air travel will once again be the joy of ‘jet-setting.’
    • This article from Wharton quotes several of the best AI thinkers and suggests that while a household helper like the Jetsons’ Rosie is quite a way off, a bit of progress may appear in time for this year’s Christmas shopping. (There’s also a lot of good serious thinking in the piece and in this one.)
    • But Stephen Hawking is back in the news warning that we’d better get ready for The Singularity: “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”
    • And Ray Kurzweil, one of the seminal thinkers on the subject, still believes that “intelligent machines will enhance humans, not replace us.”
    • Meanwhile, this survey by Sage reports that 43% of Americans “have no idea what AI is all about.” (My guess is that the actual number is quite a bit higher.)