• Some law schools (OK, very few) are finally getting serious about teaching the business of law. In this case (Northwestern), AI “goes to law school.” Northwestern’s dean, Daniel Rodriguez, is stepping down from that role but remaining on the faculty and joining ROSS “in an advisory role to help the company build out its law school and access to justice initiatives.” Putting more JDs on the street with no business training (especially tech-related training) is a disservice to the profession.

  • In this post from August, Ron Friedmann clearly lays out today’s state of implementing AI in the practice of law. These are two pages you absolutely should read. (The poll at the end is based on a haphazard sample of 200+ Twitter users. The results line up pretty well with more rigorous surveys I have seen.)

  • According to Artificial Lawyer, after several months of piloting, “… Eversheds Sutherland has announced its adoption of legal AI company ThoughtRiver as part of its managed legal services arm, ES Ignite.”

  • Here’s a good summary from Lavery de Billy LLP on the state of IP law regarding AI in Canada.

  • Bloomberg has launched a new subscription tool called “Points of Law,” “a case research platform that uses AI and data visualization to help attorneys, legal researchers and litigators highlight pertinent legal language, such as new case law or interpretations of a statute, within federal and state court opinions. It also allows users to find such legal language, which it deems ‘points of law,’ across all court opinions in its database.” There are “millions” of such opinions in the database. Details here.

  • In a step forward for AI generally, Google’s “A.I. project, AutoML, has successfully taught machine-learning software how to program machine-learning software. In some cases, the machines programmed better A.I. software than even the Google researchers could design.”

  • Yesterday I mentioned a report commissioned by the UK government re AI. One of its recommendations was more government support for AI. I enjoyed the title of this post about the report: “Keep Calm and … Massively Increase Investment in Artificial Intelligence.”

  • One of the last frontiers for AI is replacing, or at least supplementing, human “judgment,” one aspect of which is ethical/moral judgment. MIT researchers have been working on this re self-driving cars: “The Moral Machine is an MIT simulator that tackles the moral dilemma of autonomous car crashes. It poses a number of no-win type scenarios that range from crashing into barriers or into pedestrians. In both outcomes, people will die and it is up to the respondent to choose who lives. After nearly a year of collecting over 18 million responses, they have applied the results to the AI program.” This article includes a link through which you can contribute to this crowd-sourced morality.