• Easily the most reported AI story in the past few days has been: UK & France sign major deal for AI & Cyber Security cooperation. “The plan will see the UK’s national institute for data science and artificial intelligence, The Alan Turing Institute, partner with French counterpart DATAIA on research and funding initiatives.” Coverage here, here and here.

This post from Artificial Lawyer takes an in-depth look at the British Government’s support of legal tech per se.

– Another perspective on the UK’s attitudes toward and uses of tech in law is presented in this post: “Imagine Y Combinator of the tech world fame, but inside a law firm with open access to the partners, associates, technology infrastructure, and staff.” “…(S)ince the passage of the 2007 Legal Services Act, which allows non-attorney ownership of law firms (also known as an ABS structure) there has been an explosion in innovative business models combining great lawyers, business professionals, and technologists (think Riverview Law). UK Biglaw also seems to get that times are changing.”

 

  • But, while the UK and France are planning to become more competitive in AI, here, from CB Insights, are the nine companies that have been acquiring AI startups:

 

  • Also from Europe: “Researchers from the European University Institute have developed a tool designed to use artificial intelligence to scan companies’ privacy policies to identify violations of data protection laws….”

 

  • Singapore’s government is also committed to investment in AI, in this case, specifically legal AI. “The city-state’s government has established programs to advance innovation in the legal profession. Can Hong Kong, Asia’s other financial center, catch up?”

 

  • Here, from Thomson Reuters, are more thoughts about the use of AI in smaller firms. “So, what can a small law firm attorney do to gain the insights of large law firms and more experienced attorneys? The answer is artificial intelligence.” Much of this post is taken from their eBook, “Not All Legal AI Is Created Equal.”

 

  • One of the ways AI is expected to become available to smaller organizations is ‘AI as a service’ (AIaaS). This area is expected to grow rapidly: Worldwide Artificial Intelligence as a Service (AIaaS) Market 2018-2023: A $10.88 Billion Opportunity – ResearchAndMarkets.com.

 

  • Gowling has come up with an interesting way to get its employees comfortable with blockchain. “International law firm Gowling WLG has introduced a new blockchain-based peer-to-peer recognition scheme for 1,178 UK employees working at its London and Birmingham offices. The Gowling WLG Reward Token scheme (GRT), launched on 2 July 2018, was designed internally to educate employees about blockchain technology and help staff to earn and share rewards.” More here.

 

  • This (Vera Cherepanova: AI doesn’t solve ethical dilemmas, it exposes them) is an interesting read and unusual perspective about AI and ethics, focused on compliance applications and implications.

 

  • Here’s more coverage of China becoming a ‘surveillance state’. “In some cities, cameras scan train stations for China’s most wanted. Billboard-sized displays show the faces of jaywalkers and list the names of people who can’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States. Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.” The specific examples are fascinating.
  • Here’s a follow-up to yesterday’s deep dive into facial recognition. Following protests from “rights and privacy groups” such as the ACLU, Orlando Police End Test Of Amazon’s Real-Time Facial ‘Rekognition’ System.

And this post about smart cities (How Cities Are Getting Smart Using Artificial Intelligence) includes a few thoughts about AI, facial recognition and privacy. “There will be 50 billion devices connected by 2020 including a billion cameras–all feeding data to artificial intelligence platforms. Perhaps you’ve noticed the marked improvement in facial recognition on Facebook this year. Police in Shenzhen are already ticketing jaywalkers using facial recognition. We are approaching radical transparency where every search, every move, every test informs a merchant, authority, or insurer. Want to preserve any privacy? That will take some new policies.”

 

  • How ARM Is Using Artificial Intelligence To Supercharge Its Patents. “When it came to doing the due diligence on ARM’s patents, the usually long-and-labourious process that can last for weeks on end, sped through in just a couple of days. The reason was software powered by artificial intelligence that could read documents at light speed, compared to humans. Both Softbank and the law firm Slaughter & May, which represented ARM during the deal, used the same AI-based tool to trawl through both ARM’s and Softbank’s patent portfolios.” More here.

 

  • From Artificial Lawyer:

– Legal Data Talent War Heats Up With Kennedys Hires.

– AI: Moving Legal Research + Innovation Forward. This post is a review of a presentation by Anand Upadhye of Casetext and Nina Jack of Fastcase. Among the comments, “By the end of 2019, Microsoft is aiming to move 90% of the company’s legal work to alternative fee arrangements.”

 

  • Lately, Above the Law has been more AI-focused than usual. Here are a couple such worthwhile posts:

– AI And The Practice Of Law: Realizing Value. “How does one reasonably prioritize and choose which projects to pursue and which vendors to work with?” This is not the only approach to such choices, but is one worth considering.

– The Bleeding Edge Of Law: The long aversion to leveraging capital in law is changing fast. “(A)s a result of a number of factors — including the equity structure of law firms, the difficulty in measuring and quantifying legal risk, and the ability of underwriters to understand both finance and law — law had previously seemed immune to using capital as a force for positive change. That is changing fast, as capital is applied to litigation and a growing array of other areas of the law.” This is mainly a look back, but it sets the stage for profound change.

 

  • Blockchain – From Perkins Coie: Blockchain in Review – Weeks of May 7th through May 25th, 2018. Hearings in the House, and before the Federal Reserve and SEC, are reported.
  • Facial recognition AI has been in the news and on my mind a lot lately. Of course, there are legal implications, but regardless of that aspect, these developments are a big deal of which you should be aware.

– Traveling this 4th of July? Orlando’s airport has rolled out facial recognition for all departing passengers in an attempt to speed up lines (e.g., no need to show your passport at the gate). It takes two seconds and is 99%+ accurate. (Passengers can opt out.) This story from CBS News discusses the privacy implications.

– Could this get a bit out of control? Here’s a case study: “(a)cross China, a network of 176 million surveillance cameras, expected to grow to 626 million by 2020, keeps watch on the country’s over 1.3 billion citizens.” (That’s a camera for every two people.) And, the intent is total surveillance, including inside people’s homes. “According to the official Legal Daily newspaper, the 13th Five Year Plan requires 100 percent surveillance and facial recognition coverage and total unification of its existing databases across the country. By 2020, China will have completed its nationwide facial recognition and surveillance network, achieving near-total surveillance of urban residents, including in their homes via smart TVs and smartphones.” “Soon, police and other officials will be able to monitor people’s activities in their own homes, wherever there is an internet-connected camera.”

Are they effective? Last year, “(i)t took Chinese authorities just seven minutes to locate and apprehend BBC reporter John Sudworth using its powerful network of CCTV camera and facial recognition technology.” That story here. And the case of the stolen potato here.

– “We live in a surveillance society: A U.S. citizen is reportedly captured on CCTV around 75 times per day. And that figure is even higher elsewhere in the world. Your average Brit is likely to be caught on surveillance cameras up to 300 times in the same period.” This post describes how those images can be used to spot (and even predict) crime.

This post (This Japanese AI security camera shows the future of surveillance will be automated) shows AI technology being developed in Japan to spot shoplifters and discusses the concerns about such technologies.

Facebook and others (such as Adobe) are using such recognition technologies to disrupt terrorist networks and mitigate the spread of fake news. “(T)he biggest companies extensively rely on artificial intelligence (AI). Facebook’s uses of AI include image matching. This prevents users from uploading a photo or video that matches another photo or video that has previously been identified as terrorist. Similarly, YouTube reported that 98% of the videos that it removes for violent extremism are also flagged by machine learning algorithms.”

Amazon employees (like Google’s before them) are protesting their company’s selling of such technologies to the government. Amazon workers don’t want their tech used by ICE.

Many (including me) consider this a much more benevolent identity technology: Thousands of Swedes are inserting microchips into themselves – here’s why.

 

 

  • “Mishcon de Reya has joined the ranks of law firms with high-level in-house data science capability, hiring UCL computer scientist Alastair Moore as head of analytics and machine learning.”

 

  • From O’Melveny: FTC Seeking Input on Topics to be Explored at Public Hearings on Competition and Consumer Protection in the 21st Century. Topics include: “(t)he consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics.”

 

  • Here, from Digital Journal, is a discussion of the general ways law firms are using AI: Q&A: How technology is shaking up legal firms.

 

  • From Artificial Lawyer, Wolters Kluwer Joins Global Legal Blockchain Consortium. “The GLBC is a global network of key stakeholders in the legal industry, working toward rules for the standardisation, governance, and application of blockchain and related technologies in the global legal system. Its mission is ‘enhance the security, privacy, productivity, and interoperability of the legal technology ecosystem’.”

– More from Artificial Lawyer about Blockchain here: EY + Microsoft Enter the Blockchain IP + Royalties Sector. “Big Four firm EY and Microsoft have launched a blockchain solution for content rights and royalties management, joining a growing group of legal tech start-ups – which are operating at a much smaller scale – that have also developed similar blockchain-based IP solutions.”

 

  • Also from Artificial Lawyer: Global AI Governance Group: ‘AI Decisions Must Track Back to Someone’. “A newly launched AI Global Governance commission (AIGG), tasked with forming links with politicians and governments around the world to help develop and harmonise rules on the use of AI, has suggested that at least one key regulation should be that any decisions made by an AI system ‘must be tracked back to a person or an organisation’.”

This Artificial Lawyer interview with Kira’s Noah Waisberg is more than just an overview of Kira’s rapid growth; it has good insights into doc review generally.

 

  • Here’s a somewhat entertaining look at how law firms are engaging AI vendors. Buying AI for Law Firms: Like a Trip to the Auto Show.

 

  • From Lowndes, Drosdick, Doster, Kantor & Reed, P.A. via JDSupra: Should Law Firms Embrace Artificial Intelligence and R&D Labs? “Change is difficult, especially in the legal market. Yet a firm’s willingness to think differently reflects its ability to adapt, to ensure sustainability for itself, and to help solve that industrywide puzzle.”

 

  • This article from the NYT (Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So) may sound negative in suggesting that machine learning is over-hyped, but it presents other types of AI in a positive light. It’s a good read.

 

  • Also somewhat negative is this post from MIT about the AI threat: “AI programs capable of perceiving the real world, interacting with it, and learning about it might eventually become far better at reasoning and even communicating. ‘If you solve manipulation in its fullest,’ Abbeel says, ‘you’ll probably have built something that’s pretty close to full, human-level intelligence’.”
  • Toronto Startup Blue J Legal Uses AI to Help Predict How Courts will Rule on Employment Law Cases. Details here. (A lot of legal AI is centered in Toronto.)

 

  • “On 30 May 2018, the European Patent Office (EPO) held a ‘first of its kind’ (as it was called by one of the EPO officials) conference on ‘Patenting Artificial Intelligence’.” A summary of the proceedings can be found here. “The main thread connecting all the sessions was the common understanding that AI provides an immense opportunity for innovation. Moreover, the patent system must respond adequately in order to ensure that it does not stifle but enhance such innovation. The main goal of the event was therefore to facilitate a discussion about the challenges in patenting AI.”

 

  • Here’s the report of some interesting research conducted by Gowling. Among their conclusions, “(d)iscussions concerning pitfalls for the development of the blockchain family were dominated by a single point – its association with Bitcoin.”

The research should be considered qualitative and exploratory, as all they say about the methodology is: “The research was conducted by BizWord Ltd (www.bizword.co.uk), an independent business consultancy. Specific sources have been listed in the report. To compile the report, we undertook: • A quantitative, online survey, which was sent to FinTech experts in businesses headquartered around the world. • In-depth interviews with a panel of experts during early 2018. • Desktop research and analysis of publicly-available information, industry studies and forecasts.”

 

  • Press release: Simpson Thacher Partners with Columbia Business School to Enhance Incoming Associate Training. “Olga Gutman, Co-Chair of the Firm’s Attorney Development Committee, said, ‘The use of artificial intelligence is increasing the efficiency of legal work, particularly that of junior associates. We need to prepare our new associates to perform more advanced legal work for our clients from day one’.”

 

  • From McLane Middleton: United States: Everything Is Not Terminator: Using State Law Against Deceptive AI’s Use Of Personal Data. “Although killer drones and autonomous weapons get the most publicity when it comes to the dangers of artificial intelligence (“AI”),1 there is growing evidence of the dangers posed by AI that can deceive human beings. A few examples from recent headlines (click here for footnote references and the rest of the post):
    • AI that can create videos of world leaders—or anyone— saying things they never said; 2
    • Laser phishing, which uses AI to scan an individual’s social media presence and then sends “false but believable” messages from that person to his or her contacts, possibly obtaining money or personal information; 3 and
    • AI that analyzes data sets containing millions of Facebook profiles to create marketing strategies to “predict and potentially control human behavior.” 4

 

  • From Above the Law: 5 Myths About Litigation Finance. “On an episode of The Good Wife, a litigation financier had a computer that would take in data about a case and immediately spit out the numbers and parameters for funding. But that’s TV, not the real world. … (W)hen it comes to the underwriting process for litigation finance, humans remain firmly in control.”

 

  • About surveillance and your privacy: It’s RoboCop-ter: Boffins build drone to pinpoint brutal thugs in crowds. (88% accuracy.) “The use of AI for surveillance is concerning. Similar technologies involving actual facial recognition, such as Amazon’s Rekognition service, have been employed by the police. These systems often suffer from high false positives, and aren’t very accurate at all, so it’ll be a while before something like this can be combined with drones.”
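To see why that “high false positives” caveat matters so much for crowd surveillance, here is a rough back-of-the-envelope sketch. Every number is an assumption chosen for illustration (only the 88% figure echoes the accuracy reported above); the point is that when the behavior being hunted is rare, even a decent detector produces mostly false alarms.

```python
# Back-of-the-envelope look at false alarms in crowd scanning.
# All numbers below are illustrative assumptions, not statistics from the drone study.

crowd_size = 10_000          # people scanned at an event
prevalence = 0.001           # assume 1 in 1,000 is actually behaving violently
sensitivity = 0.88           # detector catches 88% of true cases (the reported accuracy)
false_positive_rate = 0.05   # assume it wrongly flags 5% of everyone else

true_cases = crowd_size * prevalence                              # 10 real incidents
true_alerts = true_cases * sensitivity                            # ~9 correct alerts
false_alerts = (crowd_size - true_cases) * false_positive_rate    # ~500 false alarms

precision = true_alerts / (true_alerts + false_alerts)
print(f"Correct alerts: {true_alerts:.0f}")
print(f"False alarms:   {false_alerts:.0f}")
print(f"Share of alerts that are genuine: {precision:.1%}")       # under 2%
```

In other words, under these assumptions most alerts would still need human review, which is the limitation the quote above is pointing at.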
  • Kudos to Suffolk University’s law school and to Jordan Furlong for his contribution to today’s launch of Suffolk University Law School’s Legal Innovation & Technology Certificate Program. “There are six courses in the program, each delivered by an experienced legal practitioner or industry analyst who delivers ten full hours of information, instruction, and insight into the course’s subject matter.” One of the first two courses is Jordan’s 21st Century Legal Services: “You’ll learn critical market insights and strategic and tactical recommendations for operating a law firm or legal services business. The coursework will focus on the current upheaval in the market and how to compete successfully in the new legal services landscape to come.”

 

  • I’ve been encouraging law firms to start using chatbots on their websites and other client interface situations. Norton Rose has launched one called “Parker” to assist “people who have questions about the European Union data protection law, the General Data Protection Regulation (GDPR).”

 

  • There has been a decent amount of discussion lately about lawyers in the age of AI perhaps needing to become much more technical, even to the point of learning to write programming code. Here’s an interesting post by Sooraj Shah that discusses how far down this path lawyers need to go. He uses the Big Four as something of a touchstone.

 

  • Here’s an interesting podcast from ALM: Lawyers, Fear Not the Smart Contract. Contributors include an assistant clinical professor of law at Cardozo Law School and co-founder of the Open Law smart contracts project; the CEO of Monax; a co-founder of blockchain company Kadena and the lead architect of its Pact smart contracts language; and a transactional attorney from Loeb & Loeb.

 

  • The University of Florida Levin College of Law publishes the Antitrust & Competition Policy Blog. Today’s post by D. Daniel Sokol, Professor of Law, is “Prediction Machines: The Simple Economics of AI,” featuring Avi Goldfarb and Ajay Agrawal.

 

  • Firms using AI solutions:

– “To streamline its due diligence processes, Maddocks has signed on to deploy the Luminance AI platform.”

– Brodies too: “Brodies deploys artificial intelligence technology from Luminance.”

  • From The Hill: “A growing number of Democratic lawmakers and civil libertarians are voicing concerns about Amazon’s facial recognition software (Rekognition), worrying that it could be misused. They fear that without proper oversight the technology could hurt minority or poor communities and allow police to ramp up surveillance.”
  • I really like this post by Tony Joyner, a partner in Herbert Smith Freehills’ Perth office. (The Inevitable Surprise: How Technology Will Change What We Do.) It provides excellent insights underscored by interesting quotes from the likes of Jack Welch, Pablo Picasso and the British Post Office.

 

  • Make time for this insightful and entertaining 49-minute panel discussion from Stanford Law about getting lawyers and law firms to embrace AI and other technologies, particularly to facilitate legal research; it features Jake Heller, Patrick DiDomenico, Jean O’Grady, Marlene Gebauer, and Jeffrey Rovner. The title of one of my friend Jeff’s slides is “The Waterfall of Tears, The sad problem of software adoption in law firms.” I love it.

 

  • Click here for The 2018 Aderant Business of Law and Legal Technology Survey.

NOTE: Starting today, when I post the results of any survey, I will also present a very top line analysis of how seriously you should take the results. I have a Ph.D. in market research from the University of North Carolina, and know what I’m talking about in this regard. Few of my readers have any technical training in statistics, so it is easy for them to be misled by the findings of research based on poor methodology. For some of my basic thoughts on this subject, check out the very first post I made on this blog.

Regarding the Aderant study, there are only 138 respondents, and it’s an email survey, so the response rate was almost certainly very low; therefore, the findings should be considered “suggestive” and “directional” rather than definitive. (Reporting findings with decimal-point detail is very misleading when the margin of sampling error is about +/-10%.) When it is reported that “the top challenges facing law firms are: pricing (36%), cybersecurity (33%), operational efficiency (32%), technology adoption (30%) and competition (26%),” those should be considered to be in a statistical tie, and year-to-year changes of less than ten percentage points should be considered likely due to sampling error. With a sample this small, any findings based on segmentation (e.g., “a closer look by firm size”) are likely spurious.
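For readers who want to see where those rough bounds come from, here is a minimal sketch of the standard 95% margin-of-error calculation for a reported proportion from a simple random sample, using the conservative p = 0.5 assumption. The figures are my own illustration, not taken from either vendor’s report, and they say nothing about non-response bias.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a reported proportion.

    Assumes a simple random sample and uses the conservative p = 0.5
    (worst case); non-response bias is ignored entirely.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Aderant survey: 138 respondents
print(f"n = 138:   +/- {margin_of_error(138):.1%}")    # ~8.3 percentage points
# Littler survey (discussed below): 1,111 respondents
print(f"n = 1,111: +/- {margin_of_error(1111):.1%}")   # ~2.9 percentage points
```

With 138 respondents the margin is roughly +/- 8 points, which is why the “top challenges” percentages are effectively tied; with 1,111 respondents (the Littler survey below) it tightens to about +/- 2.9 points.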

 

  • And here’s another survey (The Littler Annual Employer Survey, 2018). This one has 1,111 respondents. That’s a solid number of participants, so making the BIG assumption that the response rate was high (and therefore non-response bias is not of huge importance), we can be 95% confident that the percentages reported are accurate to within about +/- 2.9 percentage points. Nice.

Among the findings: “Recruiting and hiring is the most common use of advanced data analytics and artificial intelligence, adopted by 49 percent of survey respondents. Employers also said they were using big data to guide HR strategy and employee management decisions (31 percent), analyze workplace policies (24 percent) and automate tasks previously performed by humans (22 percent). The smallest group of participants (5 percent) are using advanced analytics to guide litigation strategy.”

 

  • Here’s a video from Artificial Lawyer featuring remarks by Lewis Liu, CEO and co-founder of Eigen Technologies re the limitations of AI. Note especially, “However, we should not lose sight of the limits of these new technologies. We believe firmly that the best combination for accurate data extraction at present is human + machine.”

 

  • Also from Artificial Lawyer, Ambiguity is the Killer of Smart Contracts by David Gingell, CMO, Seal Software. “A recent article by Bill Li, Chief Network Architect at MATRIX AI, entitled “Intelligent Contracts — the AI Solution for the Issue of Security and Smart Contracts” highlighted two very significant flaws in smart contracts: 1) they are not smart enough and 2) they have poor security mechanisms and tools.” The article discusses these criticisms and presents Seal’s solutions.

 

  • If there’s a need/opportunity anywhere for AI to fulfill the promise of making legal work “better, faster and cheaper,” it seems to be in Brazil. This post from Artificial Lawyer includes these amazing stats: “(a) study published by the Brazilian National Council of Justice showed that there were 70.8 million pending lawsuits in early 2014, and other 28.5 million suits were filed during that year, totalling 99.7 million cases pending to be reviewed by the judicial branch.” “In 2016, the judiciary system cost over 84 billion Brazilian Reais (approx. £17 billion) or approx. 1.3% of the Gross Domestic Product.”

 

  • Still more from Artificial Lawyer: Integra Blockchain Co. Unveils ‘Smart Leases’ for Real Estate Lawyers.

 

  • Law firms need to get serious about using chatbots to make their arcane websites easier to navigate. Here’s an update on how good chatbots can be these days.

 

  • From Imogen Ireland and Penelope Thornton of Hogan Lovells: Are the UK’s intellectual property laws ready for AI?
  • And from Richard Diffenthal and Helen McGowan of Hogan Lovells: Artificial Intelligence – time to get regulating? “We explored some of the current issues and debates with Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at the University of Birmingham.”

 

 

  • Joanna Goodman posted this to the Law Society Gazette: The Only Way is Ethics. She digs into “…the need for responsible technology and a regulatory framework for the use of artificial intelligence (AI), particularly following the Facebook/Cambridge Analytica revelations.”

 

  • This from Baker McKenzie’s Global Dispute Resolution team in London: Update on the European Product Liability Directive – Still Alive & Kicking.

 

  • Not yet taking surveillance cameras seriously? Check this post about their improving night vision. And this one about facial recognition at the Royal Wedding.

 

  • But don’t necessarily get all pessimistic/negative about AI: “One of the world’s most visible environmentalists is optimistic about the future of the planet because of technology. Former US Vice President Al Gore believes advances in machine learning, artificial intelligence, connected devices, and other technology will make it possible for society to reach sustainability goals at record speed. ‘The world is in the early stages of a sustainability revolution that has the magnitude and scale of the industrial revolution at the speed of the digital revolution,’ Gore said at the Bloomberg Sustainable Business Summit in Seattle Thursday.”

 

 

 

I swear I curate this stuff, passing over at least 20 articles and posts for every one that I mention here. It’s just a REALLY busy time for legal AI!

 

  • “(A) survey (of lawyers) by IQPC, ahead of its Legal AI Forum event later this year (18-19 September in London), gathered feedback from around 200 legal professionals and the following is what they found:” Actually, what they found is here. The findings are very interesting (with several cool infographics), and point to movement regarding AI “toward real and substantive market adoption.”

 

  • Back in December, I mentioned that “LawDroid has been awarded a contract to build a voice-activated legal aid bot in the US in a major ‘real world’ test of the technology and its access to justice (A2J) capabilities.” Seems they were serious, as Artificial Lawyer posted today that “LawDroid, the legal bot pioneer, has helped to develop a new bot (PatBot) with Washington State law firm, Palace Law, that helps clients gather essential information about personal injury claims in the workplace. The objective of the bot is not to give legal advice, but instead to provide a ‘legal health check’ to make sure potential clients are not missing out on their legal rights, or to find if they have omitted key steps in the claims procedure, which most people would not be aware of.” A2J!!

 

  • A smartphone launched in India today (Realme 1) “comes with Artificial Intelligence (AI)-powered real selfies technology, which is capable of recognising 296 recognition points based on age, sex, skin color or tone and precisely get all the face information of the owner.”

 

  • Arnold & Porter’s Rhiannon Hughes wonders Is The EU Product Liability Directive Still Fit For Purpose? “(G)iven the challenges posed by digitisation, the Internet of Things, artificial intelligence and cybersecurity now and in the future, and, if it does not, what changes would be required to address the shortcomings.”

 

  • Cleary Gottlieb Partner Lev Dassin Discusses Influence of Tech on Financial Crimes. The post is not so much a “discussion” as an attempt to get a discussion thread rolling.

 

  • From Sidley Austin’s Christopher Fonzone and Kate Heinzelman, What Congress’s First Steps Into AI Legislation Portend. “Although it’s too early to provide a definitive answer about how Congress will react, the past several months have offered the first real clues as to where lawmakers might be headed.” Details here.

 

  • Press release: “Thomson Reuters has enhanced its World-Check One platform with the launch of Media Check, a unique media screening and processing feature powered by artificial intelligence (AI) that helps address the regulatory and reputational consequences of overlooking key data in the fight against financial crime.”

 

 

  • “Amazon is currently working towards the Health Insurance Portability and Accountability Act (HIPAA) requirements so Alexa can start providing healthcare advice and information.” Details here and here.

 

  • Other interesting posts from Artificial Lawyer:

– Reed Smith Rolls Out ‘Innovation Hours’ Towards Billable Targets. Good idea!

– “US law firms appear to have well and truly got behind AI-driven legal research, with Casetext, one of the pioneers in the sector, adding AmLaw 100 firm Blank Rome to a growing list of clients that includes: Quinn Emanuel, Fenwick & West, DLA Piper, Baker Donelson, Ogletree Deakins, and O’Melveny & Myers to name a few.” Other providers and firms are discussed here.

 – Norton Rose Rolls Out ‘Parker’ the Legal Chat Bot for GDPR.

– Bryan Cave Leighton Paisner Launches Own Contracting Tool – Swiftagree.

 

  • Press release: “According to survey findings released today by Seyfarth Shaw, the majority of business leaders are more “hopeful” about the future of enterprise than last year, with 84 percent expressing optimism compared to 70 percent in 2017.” “Over the next five years, automation and artificial intelligence will have the biggest impact on business operations and processes, according to 62% of survey participants.”

 

  • Here’s another press release from Compliance.ai: With more than 2.3 Million Regulatory Documents Processed, Compliance.ai is providing Chief Compliance Officers With a Competitive Advantage and Transforming RegTech. (I posted the first one on August 16, 2017.)

 

  • Here’s an interesting list and discussion of “ethical issues raised by the use of AI in healthcare.”

 

  • “YouTube has announced that, as part of its YouTube Red original programming, Robert Downey Jr will host and narrate an eight-part documentary series … to explore AI through a lens of objectivity and accessibility, in a thoroughly bold, splashy, and entertaining way.” Details here.

 

  • I love the headline of this post from Popular Science: Did artificial intelligence write this post? Maybe. It’s an article with various news tidbits.

 

  • AI and insurance, in China. “Assessing damage caused to their rides has just gotten a lot easier for car owners in China, with the rollout of a video-based, artificial-intelligence app from Ant Financial.” “Dingsunbao 2.0’s secret sauce includes 46 patented technologies, such as simultaneous localization and mapping, a mobile deep-learning model, damage detection with video streaming, a results display with augmented reality and others.”

 

  • When making a presentation about AI yesterday, several people in the audience seemed surprised and at least a bit alarmed when I talked about surveillance cameras and AI-based facial recognition being used by governments in public places. I was surprised they were surprised. Get used to it, folks: this stuff is real, and it’s not just in China and Dubai. In many ways, trying to protect your privacy is a fight you will not win. To wit, here.
  • From Artificial Lawyer: dealWIP, an “operating system for deals,” …offer(s) ‘a cloud-based workflow integration platform for legal transactions of all types that provides a secure, frictionless and transparent environment for transactional attorneys and their clients’. “The platform will also seek to connect to many other systems a law firm may be using, such as legal AI tools for document review during an M&A deal.” Details here.

 

  • Contrary to the recent move by the UAE, “(a) team of 150 experts in robotics, artificial intelligence, law, medical science and ethics wrote an open letter to the European Union advising that robots not be given special legal status as ‘electronic persons.’” The letter says that giving robots human rights would be unhelpful: “From an ethical and legal perspective, creating a legal personality for a robot is inappropriate whatever the legal status model.”

 

  • Jones Day: “United States: Protecting Artificial Intelligence And Big Data Innovations Through Patents: Functional Claiming.”

 

  • From FSTech: The House of Lords Select Committee on Artificial Intelligence published its AI in the UK: Ready, Willing and Able? report today, with the chairman of the Committee, Lord Clement-Jones, noting that the UK has a unique opportunity to shape AI positively for the public’s benefit: “The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths.”

 

  • Malaysia:  “Thanks to a deal between Auxiliary Force and Chinese AI company YITU, certain Polis Bantuan in Malaysia will receive AI-enabled bodycams.  These bodycams will be able to identify wanted parties by the police. These cameras are also able to provide infrared for recordings in dark places, a more compact design, and the ability to potentially livestream bodycams.”

 

  • From Federal News Radio: “The Pentagon is planning, along with intelligence agencies, on setting up a new office to oversee the acquisition and development of artificial intelligence. Defense officials are alarmed at recent advancements made by U.S. adversaries.”
  • From Artificial Lawyer: Thomson Reuters is again turning to AI tools, now with a contract remediation system to help companies review and repaper legal agreements ahead of Brexit. In this case it will be using AI company Logical Construct, which leverages a combination of natural language processing (NLP) and machine learning techniques to achieve its extraction results.

 

  • From Patent Docs: FDA Permits Marketing of First AI-based Medical Device; Signals Fast Track Approach to Artificial Intelligence.

 

  • SINGAPORE (Reuters) – In the not too distant future, surveillance cameras sitting atop over 100,000 lampposts in Singapore could help authorities pick out and recognize faces in crowds across the island-state. Some top officials in Singapore played down the privacy concerns. Prime Minister Lee Hsien Loong said last week that the Smart Nation project was aimed at improving people’s lives and that he did not want it done in a way “which is overbearing, which is intrusive, which is unethical”.

 

  • Google and AI Ethics: “After it emerged last month that Google was working with the Defense Department on a project for analyzing drone footage using “artificial intelligence” techniques, Google’s employees were not happy.” “(M)ore than 3,000 of the employees signed a letter to CEO Sundar Pichai, demanding that the company scrap the deal.” “Google Cloud chief Diane Greene … told employees Google was ‘drafting a set of ethical principles to guide the company’s use of its technology and products.’” “…Greene promised Google wouldn’t sign up for any further work on ‘Maven’ or similar projects without having such principles in place, and she was sorry the Maven contract had been signed without these internal guidelines having been formulated.”

 

  • House of Representatives Hearing: GAME CHANGERS: ARTIFICIAL INTELLIGENCE PART III, ARTIFICIAL INTELLIGENCE AND PUBLIC POLICY, Subcommittee on Information Technology, APRIL 18, 2018 2:00 PM, 2154 RAYBURN HOB.
  • From Artificial Lawyer: “Ashurst has become the latest major law firm to join smart contract consortium, the Accord Project, which is seeking to build industry standards for this new form of self-executing contracting. Tae Royle, head of digital legal services at Ashurst, said: ‘Smart contracts are already being used to manage hundreds of millions of dollars’ worth of cryptocurrency and digital assets globally. But we lack common standards and frameworks for ensuring legal enforceability of smart contracts.’” The article includes discussion of the need for global regulation and an up-to-date list of the consortium participants.

 

  • Here’s an interesting discussion of the legal risks inherent in insurance companies using chatbots to deal with customers.

 

  • From The Economist, this discussion of the legal risks associated with implementation of various forms of AI in the workplace. If you have an Employment Law practice, your attorneys should be up-to-speed on these issues.

 

  • Speaking of keeping your firm’s practices engaged in AI, investment in AI and AI-related M&A continue to be very robust globally. As has become typical, I noticed three substantial deals just this morning: a $35 million round; “Digital Air Strike announced it has acquired the privately held A.I. chat technology business of Eldercare Technology Inc. (d.b.a. Path Chat)”; and “Thales Launches Its Offer on All Gemalto Shares.”

Here’s a report on the overall forecast for the global AI software market through 2022.

 

  • From Norton Rose: “Better, faster, stronger: revamping the M&A due diligence process with Artificial Intelligence platforms.”

 

  • “Global law firm Hogan Lovells today publishes Life Sciences and Health Care Horizons, a forward-looking report that identifies current and evolving trends that are shaping the future of the industry.” Of course, AI is included.

 

  • Recent advances in AI have depended on three underlying developments: 1) more sophisticated analytic algorithms, 2) availability of Big (and Bigger) Data sets, and 3) increased processing power. The latter is about to take another big step forward, as “Nvidia has unveiled several updates to its deep-learning computing platform, including an absurdly powerful GPU and supercomputer.” It will have the processing power of 2,000 MacBook Pros. And, “… the company has doubled the memory capability of its Tesla V100 GPU, which the company claims delivers the performance of up to 100 CPUs in one graphics processor. This isn’t the GPU in your gaming PC — it powers artificial intelligence research and deep machine learning.” Details here.

 

  • Well-informed citizens should have a general idea of how AI is being used by the military, so here are a couple of updates:

– “DARPA to use artificial intelligence to help commanders in ‘gray zone’ conflicts.” ‘Gray zones’ being “… those in which state and non-state competition becomes conflict but remains below the level of conventional warfare. Experts have pointed to Russia’s use of hybrid threats in Ukraine and other areas, along with China’s aggression in the South China Sea as examples.” “The ultimate goal of the program is to provide theater-level operations and planning staffs with robust analytics and decision-support tools that reduce ambiguity of adversarial actors and their objectives.”

– From The Hill, here’s, “Artificial intelligence is rapidly transforming the art of war,” a good overview of three uses of AI in war, including use of AI to win over the hearts and minds of the people (i.e., propaganda).

 

  • Finally, I expect George Orwell would have found Big Brother’s activities in China too far-fetched to make credible fiction. Most recently, “China is using AI and facial recognition to fine jaywalkers via text.” Details here.