• This story has received VERY wide coverage, with headlines including:

Stephen Schwarzman Makes Anchor Gift For New $1 Billion School Of Artificial Intelligence At MIT;

MIT announces $1b outlay for study of artificial intelligence, computing; 

M.I.T. Plans College for Artificial Intelligence, Backed by $1 Billion;

MIT commits $1 billion to make AI part of every graduate’s education;

M.I.T. wants to build an AI-focused college using a ‘planned investment’ of $1 billion.

From Simpson Thacher: “The Firm represented Blackstone Chairman and CEO Stephen A. Schwarzman’s foundation in connection with the foundation’s $350 million gift to the Massachusetts Institute of Technology. The gift is a portion of a $1 billion investment to establish a college for computing and artificial intelligence. The college, called the M.I.T. Stephen A. Schwarzman College of Computing, will address the global opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence.”

Coverage here, here, here, here and here.


  • William Hays Weissman of Littler posted Why Robot Taxes Won’t Work. Several arguments to support the thesis are presented, including: “… from a tax administration perspective, robots pay no income tax because they do not earn income, pay no sales tax because they do not purchase items, and pay no property tax because they do not own anything.”


  • Knobbe Martens published FDA Expresses Priorities for Clinical Trial Efficiency, Artificial Intelligence. “According to (FDA Commissioner Scott Gottlieb, M.D.), clinical trials “are becoming more costly and complex to administer” while “new technologies and sources of data and analysis make better approaches possible.” In order to take advantage of these better approaches, Gottlieb pointed to the FDA’s Breakthrough Devices Draft Guidance, which proposes streamlined procedures to develop flexible clinical trial designs for important medical devices. This will allow the FDA to “evaluate . . . innovative devices more efficiently.” Six breakthrough devices have already been cleared using this program.”


  • From Goodwin: Treasury Department Imposes Mandatory Filing Requirement on Parties to Certain Foreign Investments in U.S. Critical Technology Companies. “‘Emerging and foundational technologies’ soon to be controlled pursuant to a separate, interagency process underway and expected to target technologies not currently subject to ITAR or EAR controls, possibly including technologies relating to artificial intelligence, robotics, cybersecurity, advanced materials, telecommunications, and biomedicine, among others.”


  • From Osborne Clarke: Shaping future competition law enforcement in digital markets | Furman review calls for evidence. “The first set of questions in the call for evidence asks about the substantive analysis of competition in digital markets and considers: … artificial intelligence tools and their impact on competition, including whether algorithmic pricing raises new competition concerns.”


  • Can artificial intelligence change construction? “As IBM’s Watson adds its computational power to construction sites, tech sees an industry in need of an upgrade.” “On especially complicated projects, Fluor (a global engineering and construction company) will begin using two new tools, the EPC Project Health Diagnostics and the Market Dynamics/Spend Analytics, to make sense of the thousands of data points found on a crowded construction site. Constant analysis will help forecast issues before they show up, and automate how materials and workers are distributed.” “Fortune found many tech firms investing billions in construction tech firms, including Oracle, which purchased Aconex for $1.2 billion in February, and Trimble, which bought Viewpoint for $1.2 billion in April.” Much more here.


  • Mintz published Strategies to Unlock AI’s Potential in Health Care, a Mintz Series. “The Journal of the American Medical Association in its September 18, 2018 issue included four articles on deep learning and Artificial Intelligence (AI). In one of several viewpoint pieces, On the Prospects for a (Deep) Learning Health Care System, the author’s conclusions aptly describe why health care providers, entrepreneurs, investors and even regulators are so enthusiastic about the use of AI in health care: Pressures to deploy deep learning and a range of tools derived from modern data science will be relentless, given the extraordinarily rich information now available to characterize and follow vast numbers of patients, the ongoing challenges of making sense of the complexity of human biology and health care systems, and the potential for smart information technology to support tomorrow’s clinicians in the provision of safe, effective, efficient, and humanistic care.”


  • A Hunton attorney posted Lawyering Cashierless Technologies. “There is no doubt that there’s a revolution coming to the way consumers buy goods at brick and mortar stores as retailers seek to better meet customers’ need for speed and create novel shopping experiences. However, with this revolution comes new risks. There are a wide range of potential issues that retailers should consider before launching cashierless technology….”


  • Press release: First-Ever Virtual Law Firm Puts Clients First. “By using Artificial Intelligence and robots, they’re (“2nd.law”) able to provide legal services for their clients at a steeply discounted price — up to 75% lower than the rates and fees that traditional firms offer — all while putting client relationships first.”


  • Lloyd Langenhoven of Herbert Smith Freehills posted this thoughtful piece: The symbiotic relationship between lawyer and legal tech. “Continued and efficient success for the legal profession, on both a macro and micro scale, lies in the ability of the profession to foster a symbiotic relationship between legal technology, client’s expectations and traditional legal knowledge. The future looks bright and exciting for the legal profession and it is about time our professional dusted the cobwebs off and donned a new, futuristic suit.”


  • This article from The Law Society Gazette frequently cites Brown Rudnick’s Nicholas Tse: IBA Rome: Artificial intelligence must mean strict liability – and higher insurance premiums. “‘The law needs to try not to multiply problems of dealing with AI and should not invest AI with legal personality,’ he told a session moderated by Law Society president Christina Blacklaws. ‘The work of the law is to try and be pragmatic, ensuring accountability while not stifling progress.’”


  • This from an associate at a major City law firm: “For at least a year I have been reading in the legal press how wonderful corporate law firms are with technology and how their pioneering work with artificial intelligence is unleashing a ‘Fourth Industrial Revolution’/’profound paradigm shift’/’New Law 2.0 era’/insert buzz-term of choice that will fundamentally change the profession. But when I look around all I can see is some new laptops and phones given to us by our supposedly tech-savvy firms. This despite my own employer aggressively marketing itself as some kind of futuristic Silicon Valley-style start-up.”


  • From Lawyerist.com: How an Online Game Can Help AI Address Access to Justice (A2J). “It is a truth universally acknowledged, that the majority of those in possession of legal problems, remain in want of solutions. Also, ROBOTS! Ergo, we should throw AI at A2J. There is considerably less consensus, however, on how (or why exactly) this should be done. But don’t worry! There’s an app/game for that, and it lets you train artificial intelligence to help address access-to-justice issues.”


  • Holly Urban, CEO at EffortlessLegal posted: Artificial Intelligence: A Litigator’s New Best Friend? “This article is intended to help litigation attorneys looking to utilize AI to maximize their outcomes with minimal additional effort or expense.” In conclusion, like several before her, she reminds us: “As the ABA Model Rules of Professional Conduct states, ‘To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.'”


  • This story has been widely reported: Stephen Hawking feared race of ‘superhumans’ able to manipulate their own DNA. “Before he died in March, the Cambridge University professor predicted that people this century would gain the capacity to edit human traits such as intelligence and aggression. And he worried that the capacity for genetic engineering would be concentrated in the hands of the wealthy. Hawking mulled this future in a set of essays and articles being published posthumously Tuesday as ‘Brief Answers to the Big Questions….‘” “Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete.” More coverage here and here.


From Artificial Lawyer:


  • Suffolk Law School Uses Reddit to Create Legal Question A2J Taxonomy. “A collaboration of Suffolk Law School’s Legal Innovation and Technology Lab in the US and Stanford Law School’s Legal Design Lab with funding from The Pew Charitable Trusts is taking legal questions from consumers posted on social media site Reddit, and using them to create a taxonomy of legal issues to help train A2J tech applications. Sounds unusual? At first it does, but when you look deeper it all makes sense. David Colarusso, Director of Suffolk University Law School’s Legal Innovation and Technology Lab, explained to Artificial Lawyer what this is all about.” Here’s the post.


  • I particularly enjoyed this opinion piece by the founder of Artificial Lawyer, Richard Tromans: The Politics of Legal Tech – Progressives vs Conservatives. “There are clearly then a wide range of views and goals when it comes to legal tech. We are not all on the same page. There are divisions. There are competing narratives. There is a battle of ideas to see which ones win out and different people, firms and organisations are arguing for different points of view. The legal tech world is, in a word, political.”



  • The state of blockchain: 11 stats. “How many CIOs are actively adopting or experimenting with blockchain? Dig into telling data from multiple sources.” Here’s the story from The Enterprisers Project.

  • This from Artificial Lawyer: Integra Ledger Launches Tools to Add Blockchain Tech to All Legal Software.
  • American Lawyer discusses Altman Weil’s latest Law Firms in Transition Survey here. With findings like this:

“…law firm leaders are increasingly fed up with their partners’ resistance to change. Meanwhile, half of firm leaders said there’s nothing especially different about their firms compared to their competitors,”

it’s a good dose of reality. I also love this quote from the always astute Tom Clay: “(t)he whole profession is changing rapidly, and to just drift along, make incremental changes and not have a dedication to innovation in some way seems idiotic to me.” Here’s the link to the whole study.

(In my experience, Altman Weil has always been careful about methodologically sound research. This survey has 801 law firm leaders responding for an overall maximum margin of sampling error of +/-3.5% at the 95% confidence level. You won’t find many surveys in our industry with less potential for non-response bias; they managed an excellent overall 50% response rate. Nice. With few exceptions, they do not break the data down into segments too small for meaningful statistical analysis, and their longitudinal trends are all based on solid samples. I am annoyed that they report findings with unjustified decimal point precision (e.g., 27.3%), especially when they get into very small segments such as 1000+ lawyer firms. The verbatim comments add nice color to the statistical findings.)
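For anyone who wants to check that margin-of-error figure, it falls straight out of the sample size via the standard worst-case formula for a proportion. A quick sketch (generic survey arithmetic, nothing specific to Altman Weil’s methodology):

```python
import math

# Worst-case (p = 0.5) margin of sampling error for a proportion:
# MOE = z * sqrt(p * (1 - p) / n), with z = 1.96 at the 95% confidence level.
def max_margin_of_error(n, z=1.96):
    p = 0.5  # the proportion that maximizes the error term
    return z * math.sqrt(p * (1 - p) / n)

print(f"{max_margin_of_error(801):.1%}")  # → 3.5%
```

With 801 respondents, the reported +/-3.5% checks out exactly.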


  • While speaking on Silvia Hodges Silverstein’s “Buying Legal Council” webinar this morning, I believe I said “better, faster, cheaper” four or five times. This post (How AI Is Making Prediction Cheaper) on HBR‘s IdeaCast by Avi Goldfarb, a professor at the University of Toronto’s Rotman School of Management, does a good job of explaining how machine learning accomplishes those goals.


  • Because of its Big Data underpinnings, AI faces all kinds of issues with the GDPR. This post discusses the implications for the insurance industry’s uses of AI. It’s mainly bad news, but some is positive. “Regulators are beginning to teach robots who’s the boss.” More insights from Saul Ewing Arnstein & Lehr regarding InsurTech Fundamentals and Compliance Strategies for Implementation here.


  • From Artificial Lawyer: “LinkSquares, the Boston, US-based legal AI contract analytics platform for inhouse lawyers, has just raised $2.16 million in a seed funding round in which Regent Private Capital contributed $1 million.” The post includes an interview with Vishal Sunak, Co-Founder and CEO of LinkSquares.


  • Also from Artificial Lawyer: Thomson Reuters Sees AI + Blockchain Creating New Risks for Financial Services. The discussion is about compliance issues and there’s a link to a full report.


  • From American (not Artificial) Lawyer: Orrick Snags Weil Gotshal’s Patent Litigation Co-Chair (Jared Bobrow). Explaining the hire, “Orrick Herrington & Sutcliffe chairman Mitchell Zuklie believes Silicon Valley is on the cusp of a new wave of IP litigation. Instead of networking and smartphones, the new flash points will likely involve blockchain, artificial intelligence, connected vehicles and virtual reality.” Here’s the link.


  • “Seyfarth Shaw LLP announced today the formal launch of its Blockchain Technologies team, an interdisciplinary group of lawyers who counsel clients and interface with regulators to address legal issues raised by blockchain technology. Seyfarth’s Blockchain Technologies team comprises attorneys with a variety of legal practices – including Corporate, Securities, Labor & Employment, Litigation, Derivatives, Real Estate, Banking, International, Tax, Employee Benefits and Immigration Compliance….” Details here.


  • “Hunton Andrews Kurth LLP and client Ocwen Financial Corporation, … have been named 2018 ACC Value Champions by the Association of Corporate Counsel” for a “due-diligence model (that) enabled documents to be prioritized and fed into a custom platform that utilized artificial intelligence-enabled automatic text summarization and guided review.” (Better, faster, cheaper.)


  • From Jones Day: FDA Permits Marketing of First Autonomous Artificial Intelligence-Based Medical Device.


  • The US and UK are formally cooperating on the military use of AI. “Defence Secretary Gavin Williamson announced the launch of a new artificial intelligence hub as he hosted the first ever joint US-UK Defence Innovation Board meeting yesterday (21 May) to explore important areas of co-operation that will maintain military edge into the future.” Details here.


  • This is sooooo not the world I grew up in. Take the Big Three automakers for instance. Ford recently announced that it’s stopping production of all passenger cars except the Mustang, and in this post from Fortune, Mary Barra, the female (yahoo!!!) CEO of GM discusses the company’s reinvention as a tech enterprise. Among her thoughts: “(a)utonomous technology that’s safer than a car with a human driver is coming, and it’s going to get better and better and better with technologies like artificial intelligence and machine learning.” “In October she unveiled an audacious aspirational goal to focus everyone on a set of long-term targets: zero accidents, zero emissions, and zero congestion.” Hear! Hear!


  • Also from that issue of Fortune: “In our survey of Fortune 500 CEOs this year, a majority of respondents—54%—said AI was “very important” to the future of their companies. That’s up from just 39% last year, and far more than those who cited other technologies like advanced robotics (19%), virtual reality (16%), blockchain (14%), 3-D printing (13%) or drones (6%) as very important.” (I haven’t been able to find an explanation of the research methodology online, but prior years’ surveys have been almost a perfect census of the targeted CEOs, so sampling error is irrelevant and non-response bias is trivial.)


  • Google’s Duplex may not be alone in passing the Turing Test. Seems Microsoft has done it in Chinese! “While Google Duplex, which lets AI mimic a human voice to make appointments and book tables through phone calls, has mesmerised people with its capabilities and attracted flak on ethical grounds at the same time, Microsoft has showcased a similar technology it has been testing in China. At an AI event in London on Tuesday, Microsoft CEO Satya Nadella revealed that the company’s Xiaoice social chat bot has 500 million “friends” and more than 16 channels for Chinese users to interact with it through WeChat and other popular messaging services.” Details here.
  • From Artificial Lawyer: Thomson Reuters is again turning to AI tools, now with a contract remediation system to help companies review and repaper legal agreements ahead of Brexit. In this case it will be using AI company Logical Construct, which leverages a combination of natural language processing (NLP) and machine learning techniques to achieve its extraction results.
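To give a flavor of what “extraction” means here, the crudest possible version is a pattern pass over contract text. The sketch below is purely illustrative (the clause labels and regexes are my own inventions; Logical Construct’s actual system uses trained NLP and machine-learning models, not hand-written rules):

```python
import re

# Illustrative patterns for clauses a Brexit repapering review might flag.
# These regexes are invented for this sketch, not Logical Construct's models.
CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws? of ([A-Z][\w ]+)", re.I),
    "jurisdiction": re.compile(r"courts of ([A-Z][\w ]+)", re.I),
}

def extract_clauses(contract_text):
    """Return the first match for each clause type found in the text."""
    hits = {}
    for label, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            hits[label] = match.group(1).strip()
    return hits

sample = ("This Agreement shall be governed by the laws of England and Wales, "
          "and the parties submit to the courts of England.")
print(extract_clauses(sample))
# → {'governing_law': 'England and Wales', 'jurisdiction': 'England'}
```

A real system replaces hand-written rules with models trained on annotated agreements, which is what makes review at repapering scale feasible.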


  • From Patent Docs: FDA Permits Marketing of First AI-based Medical Device; Signals Fast Track Approach to Artificial Intelligence.


  • SINGAPORE (Reuters) – In the not too distant future, surveillance cameras sitting atop over 100,000 lampposts in Singapore could help authorities pick out and recognize faces in crowds across the island-state. Some top officials in Singapore played down the privacy concerns. Prime Minister Lee Hsien Loong said last week that the Smart Nation project was aimed at improving people’s lives and that he did not want it done in a way “which is overbearing, which is intrusive, which is unethical”.


  • Google and AI Ethics: “After it emerged last month that Google was working with the Defense Department on a project for analyzing drone footage using “artificial intelligence” techniques, Google’s employees were not happy.” “(M)ore than 3,000 of the employees signed a letter to CEO Sundar Pichai, demanding that the company scrap the deal.” “Google Cloud chief Diane Greene … told employees Google was ‘drafting a set of ethical principles to guide the company’s use of its technology and products.’” “…Greene promised Google wouldn’t sign up for any further work on ‘Maven’ or similar projects without having such principles in place, and she was sorry the Maven contract had been signed without these internal guidelines having been formulated.”


  • House of Representatives hearing: Game Changers: Artificial Intelligence Part III, Artificial Intelligence and Public Policy; Subcommittee on Information Technology; April 18, 2018, 2:00 p.m.; 2154 Rayburn HOB.
  • Better, faster, cheaper: from Artificial Lawyer, “The UK’s Serious Fraud Office (SFO) has announced a partnership with US-based legal data company, OpenText, to make use of its Axcelerate AI-driven doc review tool. The SFO said in a statement that ‘by automating document analysis, AI technology allows the SFO to investigate more quickly, reduce costs and achieve a lower error rate than through the work of human lawyers alone’.” More here from the Law Society Gazette.


  • Smart Contracts: More about the Accord Project: “Freshfields Bruckhaus Deringer, Allen & Overy (A&O), and Slaughter & May joined the Accord Project, which already has some of the biggest law firms in the world as members. The project is pushing for the adoption of an open source technical and legal protocol that will accept any blockchain or distributed ledger technology – a so-called ‘blockchain agnostic’ standard.”


  • From Reed Smith: UK government publishes the Digital Charter and reaffirms creation of the Centre for Data Ethics and Innovation.


  • From Knobbe Martens: “According to a U.S. Food and Drug Administration press release, De Novo premarket review was granted to Viz.AI’s LVO Stroke Platform (the Viz.AI Contact application). According to PR Newswire, Viz.AI’s LVO Stroke Platform is the “first artificial intelligence triage software” and its approval begins “a new era of intelligent stroke care.””



  • According to American Banker: Bank of America, Harvard form group to promote responsible AI. “(Cathy) Bessant, the bank’s chief operations and technology officer, wanted to bring in an academic perspective and she wanted to create a neutral place where experts from different sectors and rival companies could discuss AI and craft good policies.”


  • Horizon Robotics has debuted a new HD smart camera that boasts serious artificial intelligence capabilities and can identify faces with an accuracy of up to 99.7 percent, the company claims.


  • Ever wonder whether the text you’ve used to promote your product or service is perfectly aligned with your brand strategy? Well, “Qordoba, … today announced a revolutionary new capability for scoring emotional tone in product and marketing content.” “Qordoba’s content scoring is based on Affect Detection, a computer science discipline that applies artificial intelligence and machine learning to understand the primary emotion conveyed by written text. …, to identify the emotion associated with a specific combination of words, allowing developers and product teams to create more effective user interfaces (UI).”
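Stripped to its essentials, affect detection can be sketched as a lexicon lookup: score the text against word lists for each emotion and report the winner. The lexicon below is invented for illustration; Qordoba’s product presumably relies on trained models rather than anything this naive:

```python
import re
from collections import Counter

# Toy emotion lexicon, invented for illustration only.
LEXICON = {
    "joy": {"delighted", "love", "great", "happy", "win"},
    "anger": {"hate", "terrible", "furious", "awful"},
    "fear": {"worried", "risk", "scary", "afraid"},
}

def primary_emotion(text):
    """Return the emotion whose lexicon matches the text most often."""
    words = re.findall(r"[a-z']+", text.lower())
    scores = Counter()
    for emotion, vocab in LEXICON.items():
        scores[emotion] = sum(word in vocab for word in words)
    emotion, count = scores.most_common(1)[0]
    return emotion if count else "neutral"

print(primary_emotion("We love this great new feature"))  # → joy
```

The real work in affect detection is handling negation, sarcasm, and context, which is where the machine learning comes in.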


  • Interesting, from Science Magazine: Could artificial intelligence get depressed and have hallucinations? “As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?”
  • This post on the LMA’s Strategies+ blog (originally from last month’s issue of the magazine Strategies) has a bit of a clickbait title, “AI Is the Future of Everything, Right? Not So Fast.” The actual substance is a list of seven things law firms could be doing now to improve the business of their firms. It comes with appropriate cautions by Axiom’s Mark Masson and Sean Williams–hence the title. (These guys know what they’re talking about.)


  • And speaking of hyperbole, how about “Law Firms Moving to iManage Records Manager 10 from LegalKEY at Exponential Pace.” No doubt that iManage seems to reel in another high profile client every few days — they’re doing very well. But I’d like to see the evidence behind the “exponential movement” in this press release. The only statistic cited is that they have doubled their revenue.


  • Much is being written about AI tech enabling porn producers to realistically insert the faces of celebrities onto the bodies of porn performers. “Using machine learning and AI to swap celebrities’ faces onto porn performers’ (results in) (f)ake celebrity porn seamless enough to be mistaken for the real thing. Early victims include Daisy Ridley, Gal Gadot, Scarlett Johansson, and Taylor Swift.” According to Danielle Citron, a law professor at the University of Maryland and the author of Hate Crimes in Cyberspace, this is going to be hard to stop. “There are all sorts of First Amendment problems because it’s not their real body.” “Since US privacy laws don’t apply, taking these videos down could be considered censorship—after all, this is ‘art’ that redditors have crafted, even if it’s unseemly.”



  • And from Bird & Bird, a comprehensive overview of the state of Europe’s Competition (a.k.a. Antitrust) Law and regulation regarding “Big Data.” (Interesting that the GDPR is not mentioned.)


  • “The Global Legal Hackathon (globallegalhackathon.com) will arrive in South Africa for the first time next month, the Hague Institute for Innovation of Law (HiiL) announced today. … The event, set for Johannesburg, will see local innovators compete for a place at a New York showcase of the best lawtech innovations in the world.” A2J organization HiiL will host the event in association with Hogan Lovells.


  • Yesterday I mentioned some of the AI hype going on at Davos. There’s more. Here’s a summary.


  • Here’s an interesting parallel to the legal industry. It seems advertising agencies are facing the same sorts of AI threats as law firms: faster-moving agencies, new types of entrants performing similar services, and clients more likely to serve themselves. From the article: “The advertising landscape has changed dramatically in the last couple of years and the pace of change means the industry is now facing fierce competition from everywhere.” … “This has allowed for consultancies, technology, and media companies, as well as new types of advertising agencies, defined or not, to come into the industry.” I don’t expect Don Draper saw this coming.


  • Ireland is branding itself “the AI island.” Toward that end, the University of Limerick has launched “Ireland’s first master’s degree in artificial intelligence.” “Companies that have an AI presence established in Ireland include: Siemens, Zalando, SAP, HubSpot, Deutsche Bank, Amazon Web Services, Salesforce, Ericsson, Intel, Dell EMC, Microsoft, Fujitsu, Mastercard, Nokia Bell Labs, Huawei, LogoGrab and Soapbox Labs, to name but a few.”

On a related note, “UK teenagers (year 9 pupils) are to be taught about artificial intelligence with the launch of a new deep learning teaching kit.” “AI is already part of our everyday lives, and by the time today’s 13-year-olds are entering the workforce, it will have a significant impact on the kinds of jobs available to them,” said Beverly Clarke, the project leader.

  • It’s great to see 2018 start off without any of the AI hype that we saw in 2017. By the way, I noticed these stories this morning (italics mine):

– Google CEO Sundar Pichai said Friday on MSNBC, “AI is one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire.”

– Piccadilly Group has launched NEURO, an AI business platform that will eradicate the need for one in four management consultants, saving businesses approximately $122.bn globally.


  • This author says “AI is the future of accounting,” but does not believe it will eliminate accountants’ jobs. Interesting. He stipulates that: “…if the AI system is well configured, it can eliminate accounting errors that are generally hard to find and thereby reduce our liability and allows us to move to a more advisory role.” I wonder how many accountants are interested in or able to assume “a more advisory role.” I have the same question, but to a lesser degree, re lawyers.


  • This from the ‘getting in front of crime/corruption’ desk, “a computer model based on neural networks that calculates the probability in Spanish provinces of corruption, as well as the conditions that favor it.”
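The article doesn’t publish the model, but the shape of such a predictor is easy to sketch: features about a province go in, a probability comes out. Here is a single logistic neuron with invented features and weights (the actual study trained neural networks on real provincial data; nothing below reflects its published parameters):

```python
import math

# Invented feature names and weights, for illustration only; the real
# model was trained on data, not hand-specified like this.
WEIGHTS = {"real_estate_tax_growth": 1.8, "gdp_growth": -0.9, "same_party_years": 0.6}
BIAS = -2.0

def corruption_probability(features):
    """Single logistic neuron: sigmoid(w . x + b) -> probability in (0, 1)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = corruption_probability(
    {"real_estate_tax_growth": 1.2, "gdp_growth": 0.5, "same_party_years": 2.0}
)
print(f"{p:.2f}")  # → 0.71
```

A full neural network stacks many such units and learns the weights from labeled examples, but the input-to-probability shape is the same.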


  • Ken Grady does not get excited about developments without reason, and his recent post about MDR Labs shows real enthusiasm for this effort to actually DO something in LegalTech. He and several luminaries serve on the Lab’s advisory board. Details here.


  • Jones Day just posted this brief on “FDA’s Evolving Regulation of Artificial Intelligence in Digital Health Products.”


  • Perkins Coie announced that Matt Kirmayer has joined the firm’s San Francisco office as a partner. His portfolio includes AI clients.


  • As this year progresses, I expect it will be increasingly difficult to separate cybersecurity from AI. So, I find it noteworthy that Cooley just added three more cybersecurity partners.


  • Just what we need, another entrant into the “using AI for contracts” space. I think this one (Evisort) deserves note because it was created by students at Harvard Law.


  • More from Philip Segal, something of a follow up to my reference yesterday. In this one, he cautions that lawyers face ethical problems if they don’t keep track of (and accept responsibility for) what their AI systems are doing. “…(T)oo much passivity in the use of AI is not only inefficient. It also carries the risk of ethical violations.”


  • iManage was just named a winner in the Best Use of Technology category at the Eclipse Proclaim Modern Law Awards for its innovative use of AI technology by the Serious Fraud Office (as opposed to the Comical Fraud Office?), a UK-governmental department in charge of prosecuting complex cases of fraud and corruption. SFO recently utilized iManage’s RAVN artificial intelligence (AI) platform to help a team of investigators sift through 30 million documents. Details here.


  • Here’s a lengthy but engaging narrative about how one firm embraced (or rather will embrace) Machine Learning and other technologies to improve client service and quality of life for its lawyers. “Science Fiction – A Day in the Life of a Lawyer in 2030.”


  • Well, we’ve heard from just about everyone else, and now Pope Francis has weighed in on AI: “Artificial intelligence, robotics and other technological innovations must be so employed that they contribute to the service of humanity and to the protection of our common home, rather than to the contrary.” Hard to argue with that sentiment, but the devil will be in the details.

Along those lines, Element.AI, which last year raised $102 million to work with charities, non-governmental organizations and others on “AI for good,” is opening an outpost in London, its first international expansion.


  • After a few delays, the first Amazon Go store opened to the public in Seattle yesterday with no cashiers or checkout lines. They haven’t announced any plans for more stores, but remember, Amazon now owns Whole Foods.


  • I’m sure many of my readers are tired of my warnings about AI’s coming exacerbation of our wealth inequality crisis (yes, “crisis”). If so, skip this post. If not, you may find this report from the World Economic Forum’s annual summit (based on research by Deloitte) interesting. It focuses mainly on income inequality in India, China and South Africa.


  • AI news from Manzama: Manzama Signals is ready for prime time. “Manzama Signals is our newest innovation designed to help firms leapfrog the competition by using proprietary algorithms to identify activities or indicators that may signal opportunities for law firms thereby helping legal professionals to better act upon opportunities. Signals employs data driven models to inform decision-making by identifying key indicators signaling that a company may have a need for legal services, or that trends are developing within industries which may lead to significant increases in the need for legal services within those industries.”


  • Here, from Amy Spooner, is an interesting discussion of investments in Alternative Legal Service Providers (ALSPs). I particularly like this quote: “Innovation is not occurring in law firms or legal departments, …(i)t’s occurring among the businesspeople who want to get around the legal department, and then it’s adopted by younger lawyers who recognize that it will make their lives easier. It’s almost got to overwhelm the powers-that-be before they officially endorse it.” And this one: “Lawyers shouldn’t cede that ground to vendors, computer scientists, or venture capitalists. They should embrace, own, create, and contribute to the development of tools that can make them better able to serve in whatever capacity they act as lawyers.” Ethics are touched on in the piece and there’s a pretty deep dive into the state of A2J.


  • In this piece, attorney Philip Segal has some good tips for lawyers beginning to work with AI tools, though he does underestimate AI’s ability to “guess” and “imagine.”


  • From Artificial Lawyer: “…Berwin Leighton Paisner has chosen litigation tech company Opus 2‘s electronic trial platform, Magnum. … (I)ts key benefit is that it allows parties involved in litigation to work together seamlessly, annotate documents, create hyperlinks between them and operate in a gated wholly digital space, which removes the need for huge paper bundles.”


  • Also from Artificial Lawyer, a deeper dive than my prior post into Zero, a tool for managing email and other time spent on mobile devices: “Boosting Law Firm Profits By Capturing Mobile Device Time.”


  • IMHO, Salesforce (Einstein) has been the leading AI-enabled CRM platform for some time. Now they are partnering with IBM (Watson); they’re now “preferred providers” of/for each other. This press release doesn’t have a lot of specifics, so we’ll have to watch to see what really comes of the partnership.


  • Here’s a very brief but interesting overview of the industries to which Elon Musk is applying AI.


  • Welcome to the club: India and Japan have announced plans to collaborate in the development of military AI and robots. And here’s a rather deep dive into how AI is being used by the US military and beyond. How’s this for just a bit scary: “(c)yber- and electronic warfare-hardened, network-enabled, autonomous and high-speed weapons capable of collaborative attacks,” and “AI devices that allow operators of all types to ‘plug into and call upon the power of the entire Joint Force battle network to accomplish assigned missions and tasks.'”


  • Maybe not scary, but I find this useful advance in AI at least a bit creepy: “FDA Approves Artificial Intelligence That Can Predict Death.”


  • These 10 AI predictions for 2018 are more insightful than most — worth a quick read.


  • Perhaps I should have saved this little 150-pager from Microsoft for a weekend, but I didn’t want to wait. Here’s “The Future Computed, Artificial Intelligence and Its Role in Society.” “Law” is mentioned 93 times, looking back, discussing the present, and even forecasting a bit. Topics such as privacy, ethics, the need for regulation, jobs, anti-trust, liability, and data protection are covered. There’s a full chapter titled, “Principles, Policies and Laws for the Responsible Use of AI.” The authors say, as I have time and again, that smart lawyers will take up a new practice called “AI Law.”