• Some law schools (OK, very few) are finally getting serious about teaching the business of law. In this case (i.e., Northwestern), AI “goes to law school.” Their dean, Daniel Rodriguez, is stepping down from that role but remaining on the faculty and joining ROSS, “in an advisory role to help the company build out its law school and access to justice initiatives.” Putting more JDs on the street with no business training (especially tech-related) is a disservice to the profession.

 

  • In this post from August, Ron Friedmann clearly lays out today’s state of implementing AI in the practice of law. These are two pages you absolutely should read. (The poll at the end is based on a haphazard sample of 200+ Twitter users. The results line up pretty well with more rigorous surveys I have seen.)

 

  • According to Artificial Lawyer, after several months of piloting, “… Eversheds Sutherland has announced its adoption of legal AI company ThoughtRiver as part of its managed legal services arm, ES Ignite.”

 

  • Here’s a good summary from Lavery de Billy LLP on the state of IP law regarding AI in Canada.

 

  • Bloomberg has launched a new subscription tool called “Points of Law,” “a case research platform that uses AI and data visualization to help attorneys, legal researchers and litigators highlight pertinent legal language, such as new case law or interpretations of a statute, within federal and state court opinions. It also allows users to find such legal language, which it deems ‘points of law,’ across all court opinions in its database.” There are “millions” of such opinions in the database. Details here.
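
The basic mechanic behind a tool like this — surfacing standard-stating, citation-dense sentences from opinions — can be illustrated with a toy extractive heuristic. To be clear, this is only a sketch of the general idea, not Bloomberg’s actual method; the cue phrases, citation pattern, and weights below are my own assumptions.

```python
import re

# Toy heuristic for surfacing "point of law"-style sentences from an opinion.
# NOT Bloomberg's method: cue phrases, citation pattern, and weights are
# illustrative assumptions only.
CUE_PHRASES = ["we hold", "it is well settled", "to establish", "a plaintiff must",
               "the elements of", "the standard of review"]
CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\d?d|S\. ?Ct\.)\s+\d+")

def score_sentence(sentence: str) -> int:
    """Crude relevance score: legal cue phrases plus citation density."""
    lowered = sentence.lower()
    score = sum(2 for phrase in CUE_PHRASES if phrase in lowered)
    score += 3 * len(CITATION_RE.findall(sentence))
    return score

def points_of_law(opinion_text: str, top_n: int = 5):
    """Return the top-scoring sentences as candidate 'points of law'."""
    # Naive sentence splitter: break after ./?/! when the next word is capitalized.
    sentences = re.split(r"(?<=[.?!])\s+(?=[A-Z])", opinion_text)
    ranked = sorted(sentences, key=score_sentence, reverse=True)
    return [s for s in ranked[:top_n] if score_sentence(s) > 0]

if __name__ == "__main__":
    sample = ("We hold that the district court erred. To establish negligence, "
              "a plaintiff must show duty, breach, causation, and damages. "
              "See 550 U.S. 544 (2007). The weather that day was unremarkable.")
    for s in points_of_law(sample):
        print("-", s)
```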

 

  • In a step forward for AI generally, Google’s “A.I. project, AutoML, has successfully taught machine-learning software how to program machine-learning software. In some cases, the machines programmed better A.I. software than even the Google researchers could design.”
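
At bottom, “ML that designs ML” is a search over model configurations, with an outer learner (or even plain random search) proposing candidates and keeping whichever performs best. The sketch below is a deliberately simplified stand-in for the idea — Google’s AutoML actually uses reinforcement learning over neural-network architectures — and every parameter name and score here is hypothetical.

```python
import random

# Toy "AutoML"-style loop: propose configurations from a search space, score
# each one, keep the best. Purely illustrative; the search space and scoring
# function are made up for this sketch.
SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def evaluate(config: dict) -> float:
    """Stand-in for 'train a model with this config and return its accuracy'."""
    rng = random.Random(str(sorted(config.items())))  # deterministic fake score
    bonus = 0.05 * (config["layers"] == 2) + 0.05 * (config["units"] == 64)
    return 0.70 + bonus + rng.uniform(0.0, 0.1)

def auto_search(trials: int = 25):
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = auto_search()
    print(f"Best configuration found: {config} (fake accuracy {score:.3f})")
```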

 

  • Yesterday I mentioned a report commissioned by the UK re AI. One of the recommendations was more government support for AI. I enjoyed the title of this post about the report, “Keep Calm and … Massively Increase Investment in Artificial Intelligence.”

 

  • One of the last frontiers for AI is replacing or at least supplementing human “judgment,” one aspect of which is ethical/moral judgment. MIT researchers have been working on this re self-driving cars: “The Moral Machine is an MIT simulator that tackles the moral dilemma of autonomous car crashes. It poses a number of no-win type scenarios that range from crashing into barriers or into pedestrians. In both outcomes, people will die and it is up to the respondent to choose who lives. After nearly a year of collecting over 18 million responses, they have applied the results to the AI program.” This article includes a link through which you can contribute to this crowd-sourced morality.
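
Mechanically, “crowd-sourced morality” of this sort boils down to aggregating millions of pairwise choices into a preference the system can consult. The toy aggregation below is my own illustration of that step — MIT’s actual modeling is far more sophisticated — and the scenario and outcome labels are invented.

```python
from collections import Counter, defaultdict

# Toy aggregation of Moral Machine-style responses: for each dilemma, count
# which outcome respondents chose and expose the majority preference.
# Scenario/outcome names are invented; this is not MIT's actual model.

responses = [  # (scenario_id, outcome the respondent chose)
    ("swerve_vs_stay", "protect_pedestrians"),
    ("swerve_vs_stay", "protect_passengers"),
    ("swerve_vs_stay", "protect_pedestrians"),
    ("barrier_vs_crowd", "hit_barrier"),
    ("barrier_vs_crowd", "hit_barrier"),
]

def build_policy(responses):
    votes = defaultdict(Counter)
    for scenario, outcome in responses:
        votes[scenario][outcome] += 1
    # The majority choice for each scenario becomes the learned preference.
    return {scenario: counts.most_common(1)[0][0] for scenario, counts in votes.items()}

print(build_policy(responses))
# {'swerve_vs_stay': 'protect_pedestrians', 'barrier_vs_crowd': 'hit_barrier'}
```
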
  • Richard Susskind: By 2036, he posits, “It is neither hyperbolic nor fanciful to expect that the legal profession will have changed beyond recognition.” Check out this article for comments by several others regarding the future impact of AI on lawyering. (My prognostication is that Susskind’s predictions are closer to what we’ll see than those of most of these naysayers.)

 

  • One of the developments that has caught me by surprise is the number of major players in AI who have open-sourced their tech. Here’s such an announcement from Intel. This is particularly good news for smaller companies (and law firms) who want to get into the game without huge investments. This author agrees about the benefit to smaller organizations.

 

  • Clifford Chance is staffing up to focus on cutting-edge tech.

 

  • Big investments in:

Legal AI: Legal research leader ROSS Intelligence has landed an $8.7M investment from a group led by iNovia Capital. ROSS uses the IBM Watson AI engine and is now working with more than 20 law firms.

Marketing: A group led by Insight Venture Partners has purchased Nashville-based “Emma,” a tech-driven eMarketing company.

 

  • The porn industry has been a driver of many tech innovations (e.g., VCRs, DVDs, augmented reality, Internet streaming). Now Pornhub is using facial recognition to identify and tag the “stars” of their videos.
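
Under the hood, this kind of tagging is a nearest-match problem over face embeddings: encode known faces once, then compare each new face against the gallery. The sketch below uses the open-source face_recognition library to show that general pattern; it is obviously not the site’s actual production pipeline, and the names and file paths are placeholders.

```python
import face_recognition

# Generic face-tagging pattern: compare face encodings from a video frame
# against reference encodings for known people. File paths and names are
# placeholders; this is a sketch of the technique, not any site's pipeline.

known_people = {"Performer A": "reference_a.jpg", "Performer B": "reference_b.jpg"}
known_encodings, known_names = [], []
for name, path in known_people.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip reference images where no face was detected
        known_encodings.append(encodings[0])
        known_names.append(name)

# Tag every face found in one extracted video frame.
frame = face_recognition.load_image_file("video_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    tags = [name for name, is_match in zip(known_names, matches) if is_match]
    print("Tags for this face:", tags or ["unknown"])
```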

 

  • Here’s a fun comparison by Time of where AI seems to really be headed vs. its portrayal in Sci-Fi.
[Image: Boris Karloff, Colin Clive and Dwight Frye in a scene from the 1931 Universal Pictures production of Frankenstein.]
  • According to Artificial Lawyer, “Global insurance law firm, Clyde & Co, has launched a special consultancy division dedicated to helping clients with smart legal contract and blockchain technology.”

 

  • Here’s a Q&A with Luis Salazar re Salazar Law’s experience using ROSS to do legal research. Spoiler: GCs love it.

 

  • Last week, there was a “Nordic Legal Tech Day” in Stockholm. The keynote was Richard Tromans’ “Legal AI – Where it Stands Today and What it Means For Lawyers and Clients.” This stuff is everywhere!

 

  • Speaking of conference keynotes, in preparation for next year’s EU General Data Protection Regulation, this Wednesday the UK’s Law Society will hold a conference titled “Legal services in a data driven world.” The keynote will be by Dave Coplin, chief envisioning officer, The Envisioners Ltd.

 

  • Thomson Reuters is one of the best and most easily integrated sources of Big Data for law firms. They have just upgraded their CLEAR online investigations suite to include business and organizational data for completing identity verification tasks, including risk evaluation and business vetting. Details here.

 

 

  • Algorithmic stock trading has been around for a long time, but this weekend there were a couple of good articles on the subject, in case you’d like a refresher or an update.
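
If you want a concrete picture of what the simplest form of algorithmic trading looks like, the textbook example is a moving-average crossover rule: buy when a short-term average crosses above a long-term one, sell on the reverse cross. The sketch below is a bare-bones illustration of that rule only — not investment advice, and nothing like what production trading systems actually run.

```python
# Bare-bones moving-average crossover strategy, for illustration only.
# Buy when the short-term average crosses above the long-term average;
# sell when it crosses back below.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def crossover_signals(prices, short=5, long=20):
    """Yield (day_index, 'BUY' or 'SELL') whenever the two averages cross."""
    prev_diff = None
    for day in range(long, len(prices) + 1):
        history = prices[:day]
        diff = moving_average(history, short) - moving_average(history, long)
        if prev_diff is not None:
            if prev_diff <= 0 < diff:
                yield day - 1, "BUY"
            elif prev_diff >= 0 > diff:
                yield day - 1, "SELL"
        prev_diff = diff

if __name__ == "__main__":
    # Hypothetical daily closing prices.
    prices = [100, 101, 99, 98, 97, 96, 97, 99, 102, 105, 107, 108, 110, 109,
              111, 113, 112, 114, 116, 118, 119, 121, 120, 118, 115, 112, 110,
              108, 107, 105, 103, 101]
    for day, signal in crossover_signals(prices):
        print(f"Day {day}: {signal}")
```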

 

  • There’s a LOT of money being invested in AI. This article from Barron’s has some thoughts on the subject.

 

  • This very readable essay by Danny Guillory, published by GE, argues that we need to get focused on AI’s sensitivity to diversity in all of its forms, including ethnicity, gender, age, culture, tradition, and religion.

 

  • This article from Forbes presents “Five Reasons Why Corporations May Be Slow To Adopt AI.” They all apply to law firms.

 

  • In the mood for futurist speculation about AI? Then check out this summary of Max Tegmark’s new book: “Life 3.0: Being Human in the Age of Artificial Intelligence.”

 

  • Just for fun: A History of Artificial Intelligence in Top 10 Landmarks.

 

  • Irish firm McCann Fitzgerald has chosen Neota Logic as its AI vendor. This is Neota’s first foray into Ireland.

 

 

  • Several studies have suggested that SMEs (small and medium enterprises) are more open to AI than larger enterprises. Here’s one such study from the UK.

  • China intends to be the world leader in AI by 2030, and late July’s second annual Legal+Technology New Champions Annual Convention was consistent with that theme. The western participants in Hangzhou included folks from Neota Logic, Thomson Reuters, IBM Watson and ROSS, as well as Richard Susskind. “The event was hosted by Shanghai-based ‘BestOne Information & Technology Co.’, a legal services provider whose flagship offering is a consumer-facing platform for quick legal consultations delivered through an online and voice platform. (Since 2009, BestOne has delivered more than 21 million legal consultations.)” I look forward to learning more about ‘BestOne.’ A quick web search yielded little info.

 

  • Here’s a good overview of the applicability of AI to smaller firms.

 

  • Yesterday’s keynote address by Senior Minister of State Indranee Rajah, Ministry of Law and Ministry of Finance, at the “Future Lawyering Conference 2017” in Singapore is definitely worth reading. She mentioned many recent uses of AI in the law (e.g., DoNotPay, COIN, Verifi, FLIP, ROSS) and concluded that “(i)t is those lawyers who are able to innovate and adapt and adopt technology who will win the future.”

I found it especially interesting that she specifically discussed the need for law firms to market their services:

Finally, let me say something about branding and marketing.
– You can be the best lawyer or the best firm with the best technical skills but if clients don’t know you, you won’t get the work. Out of sight is out of mind.
– Singapore law practices need to more actively market their services. A Law Society study of small and medium sized practices found that 53% of interviewees had no deliberate business development plans.
– Business development should not be left to chance. Singapore law practices must have proper business development plans to grow their businesses, especially if they want to regionalise and internationalise.
– Branding and marketing will become even more important as law firms regionalise. And this is branding and marketing not just for law firms but for the individual lawyers themselves. Because each of you have to differentiate yourselves from the others, in order to capture your share of the work. So you have to think of your value proposition: what is it that differentiates you from the others? What is your personal brand? And then you have to work out how it fits into your firm’s brand.

 

  • And Singapore is one of the many countries/governments looking to AI to help propel them into the future. Singapore in particular cites a recent Accenture study’s prediction that AI will “nearly double Singapore’s annual economic growth rates by 2035. The research also found that Singapore is at the forefront to integrate innovation and technologies into the wider economy, ahead of the largest economies in the world such as the United States, Germany, United Kingdom & Japan.” This will be driven, at least in part, by Singapore’s Smart Nation initiative.

On the legal front, Singapore is already using AI to fight money laundering.

And China has similar intentions: “China intends to become the world’s artificial intelligence leader in 2030, according to the manifesto it just released describing plans to create an industry of $150 billion and an environment that has AI ‘everywhere.’”

 

  • We need anti-robot legislation. I agree with the author of this editorial from the NYT, who cites bots’ behaviors, ranging from the annoying to the very bad, as reasons for a “Blade Runner” law:

– For popular Broadway shows (need we say “Hamilton”?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains,

– Product reviews have been swamped by robots (which tend to hand out five stars with abandon),

– In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as “small” donors,

– During the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal,

– Facebook has admitted it was essentially hacked during the American election in November, and

– This spring, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people, flooding the system with fake comments against federal net neutrality rules.

As a remedy, the author suggests, “(t)he ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a ‘Blade Runner’ law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, ‘I am a robot.’ When dealing with a fake human, it would be nice to know.”
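
The disclosure half of that proposal would be trivial to implement on the sender’s side; detection and enforcement are the hard parts. Just to make the idea concrete, here is a hypothetical wrapper that forces an automated agent to identify itself before every message — the class and method names are invented for this sketch, not taken from any statute or real bot framework.

```python
# Hypothetical illustration of a "Blade Runner law" disclosure requirement:
# an automated agent that identifies itself as a robot in every message.
# Class, method, and parameter names are invented for this sketch.

DISCLOSURE = "I am a robot."

class DisclosedBot:
    def __init__(self, name, send_fn):
        self.name = name
        self.send_fn = send_fn  # e.g., a function that posts to a chat, forum, or comment system

    def say(self, message):
        # The disclosure is prepended to every outgoing message, so the human
        # on the other end always knows they are not talking to a person.
        self.send_fn(f"[{self.name}] {DISCLOSURE} {message}")

if __name__ == "__main__":
    bot = DisclosedBot("TicketHelper", send_fn=print)
    bot.say("Two seats are still available for Saturday's show.")
```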

 

  • And to finish up on a bit of a frightening note, Russia has joined the US and China in the integration of AI into its missile systems. (“It’s not just missiles that will get a robotic makeover. In May, the head of another leading weapons group said he wanted to bring artificial intelligence to ‘swarms of drones.’”)

Have a great weekend!

Today in Legal AI

  • Altman Weil has released its 2017 Law Firms in Transition Survey. One of the questions is: “Technology tools that incorporate artificial intelligence (AI) and machine learning — like Watson and Ross — are beginning to be adopted by some law firms. What is your firm’s stance on the use of AI tools?” With this narrow definition, only 7.5% “are already beginning to make use of these tools.” Of course, many more than this are already using eDiscovery and other forms of AI. In these survey results there is a very strong positive correlation between size of firm and use of AI tools.
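
That size-versus-adoption relationship is easy to picture: bucket firms by lawyer headcount, compute the adoption rate in each bucket, and the rate climbs with size. As a quick illustration with entirely made-up numbers — not Altman Weil’s actual data — the correlation calculation looks like this:

```python
# Illustrative only: hypothetical adoption rates by firm-size bucket, NOT
# Altman Weil's survey data, used to show a size/adoption correlation.

firm_size = [25, 75, 150, 350, 750, 1500]             # hypothetical lawyer counts
adoption_rate = [0.02, 0.04, 0.06, 0.12, 0.22, 0.35]  # hypothetical share using AI tools

def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

print(f"Correlation between firm size and AI adoption: {pearson(firm_size, adoption_rate):.2f}")
```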


This piece provides a good summary of today’s most important applications of AI in the provision of legal services, a bit of a look into the near future and a discussion of the ethical considerations inherent in AI and the law. As is often the case with rapidly evolving technologies, laws and regulation substantially lag AI’s capabilities (not just in the practice of law). A few governments (e.g., the UK and EU) are taking steps to catch up, but none are where they need to be.


Ethical Conundrums in AI