• Brave New World of (Robot) Law. “In a new report, the U.S. Chamber of Commerce contemplates not-so-distant questions about robot law.”

 

  • From Shearman & Sterling: Artificial Intelligence and Algorithms in Cartel Cases: Risks in Potential Broad Theories of Harm.

 

 

  • From Wired: From Fitbits to PlayStations, the justice system is drowning in digital evidence. The article cites experience in Germany, the UK, Australia and Ohio.

 

  • How Artificial Intelligence Will Impact Corporate Communications. “I have seen a glimpse of the future impact of artificial intelligence on corporate communications – and it is good. AI will bring a new level of trust to information, improve the way information is delivered (i.e., via augmented reality and virtual reality apps) and provide better insights and predictive analytics for decision making by corporate communications professionals.” The post is here.

 

  • Here’s more on AI in sentiment analysis. This time from China. China’s largest smartphone maker is working on an A.I. that can read human emotions. According to Felix Zhang, vice president of software engineering, “We want to provide emotional interactions.”

 

  • For your weekend enjoyment, here are posts about AI in the fields of sports, music and weed:

– Xs & Os, 0s & 1s: How Atlanta tech is embedding itself in sports

– From Billboard: Musiio uses AI to help the music industry curate tracks more efficiently. “A former streaming industry exec and an AI specialist walk into a bar, they leave starting an AI company for the music industry.”

– How Music Generated by Artificial Intelligence Is Reshaping — Not Destroying — The Industry. “…(I)f we take the long view on how technological innovation has made it progressively easier for artists to realize their creative visions, we can see AI’s genuine potential as a powerful tool and partner, rather than as a threat.”

– VantagePoint Software’s Artificial Intelligence Now Forecasts Cannabis Stocks. “Traders can now use the platform’s artificial intelligence-based indicators to profitably trade cannabis stocks.” Story here.

  • In this post, Ken Grady does not predict the future of the legal industry, but he presents an interesting framework for such forecasts.

 

  • If you’re new to Legal AI, you may find this overview from ROSS useful. “The adoption of artificial Intelligence (AI) technology is undoubtedly transforming the practice of law.”

 

  • Here, from HBR, is another good background piece. This one describes the process behind AI-based prediction: A Simple Tool to Start Making Decisions with the Help of AI.
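The HBR piece stays conceptual, but the mechanics it describes boil down to fitting a model on labeled historical data and then scoring new cases with it. Here is a minimal, hypothetical sketch of that workflow; the matter data, features, and the “settle vs. litigate” framing are invented purely for illustration and are not from the article.

```python
# Minimal sketch of AI-based prediction: fit a model on labeled history, score a new case.
# Features and data are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Historical matters: [claim_amount_in_$000s, prior_disputes] -> 1 if settled, 0 if litigated
X_history = [[50, 0], [250, 3], [10, 1], [500, 5], [75, 0]]
y_history = [1, 0, 1, 0, 1]

model = LogisticRegression(max_iter=1000).fit(X_history, y_history)

# Estimate the probability that a new, unseen matter settles.
new_matter = [[120, 2]]
print(f"P(settle) = {model.predict_proba(new_matter)[0][1]:.2f}")
```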

 

  • Access to Justice (A2J) news from Artificial Lawyer: “Legal AI pioneer, Neota Logic, has helped to develop an automated doc creation tool for divorce in Australia, in another example of using legal tech to improve access to justice. It is also apparently the first time something like this has been done in Australia. … the application helps people involved in a divorce to complete essential legal documents that they need to submit to court to help complete their separation.”

 

  • Shearman & Sterling via Lexology: Artificial Intelligence and Algorithms in Cartel Cases: Risks in Potential Broad Theories of Harm.

 

  • Tel Aviv-based LawGeex, which has developed automated contract review technology…, is announcing that it has closed $12 million in new investment…. It brings the startup’s total funding to date to $21.5 million.

 

  • Here’s an interesting thought piece from Poland’s Adam Polanowski of Wardyński & Partners: “Although the media often announce revolutions prematurely, and autonomous AI is still a long way off, changes in this field are unavoidable. The only question is how fast the legal profession will be automated.”

 

  • From Artificial Lawyer: “Los Angeles-based, legal AI litigation data analysis company, Gavelytics, has today announced that its proprietary database now extends into San Francisco County Superior Court, which it calls ‘the legal epicenter of one of the most significant business and technology markets in the world’.”

 

  • “Pentagon technology chief Mike Griffin last week announced US military plans to form an office dedicated to procuring and deploying artificial intelligence technology. Meanwhile, the White House still doesn’t have a science adviser.” Details here.
  • From Artificial Lawyer: dealWIP, an “operating system for deals,” …offer(s) ‘a cloud-based workflow integration platform for legal transactions of all types that provides a secure, frictionless and transparent environment for transactional attorneys and their clients’. “The platform will also seek to connect to many other systems a law firm may be using, such as legal AI tools for document review during an M&A deal.” Details here.

 

  • Contrary to the recent move by the UAE, “(a) team of 150 experts in robotics, artificial intelligence, law, medical science and ethics wrote an open letter to the European Union advising that robots not be given special legal status as ‘electronic persons.’” The letter says that giving robots human rights would be unhelpful: “From an ethical and legal perspective, creating a legal personality for a robot is inappropriate whatever the legal status model.”

 

  • Jones Day: “United States: Protecting Artificial Intelligence And Big Data Innovations Through Patents: Functional Claiming.”

 

  • From FSTech: “The House of Lords Select Committee on Artificial Intelligence published its AI in the UK: Ready, Willing and Able? report today, with chairman of the Committee, Lord Clement-Jones, noting that the UK has a unique opportunity to shape AI positively for the public’s benefit.” “The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths.”

 

  • Malaysia: “Thanks to a deal between Auxiliary Force and Chinese AI company YITU, certain Polis Bantuan in Malaysia will receive AI-enabled bodycams. These bodycams will be able to identify parties wanted by the police. These cameras are also able to provide infrared for recordings in dark places, a more compact design, and the ability to potentially livestream footage.”

 

  • From Federal News Radio: “The Pentagon is planning, along with intelligence agencies, on setting up a new office to oversee the acquisition and development of artificial intelligence. Defense officials are alarmed at recent advancements made by U.S. adversaries.”
  • From Artificial Lawyer: Thomson Reuters is again turning to AI tools, now with a contract remediation system to help companies review and repaper legal agreements ahead of Brexit. In this case it will be using AI company Logical Construct, which leverages a combination of natural language processing (NLP) and machine learning techniques to achieve its extraction results.
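The item doesn’t describe Logical Construct’s internals, so what follows is only a hypothetical sketch of the general idea behind extraction-driven contract review: rules (or learned models) locate the clauses that matter for repapering. The clause names, patterns, and sample text below are assumptions for illustration, not the vendor’s method.

```python
# Hypothetical sketch of rules-assisted clause extraction for contract review.
# Patterns and sample text are illustrative only; real systems combine NLP and ML models.
import re

CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws? of ([A-Z][\w\s]+?)[\.,]", re.IGNORECASE),
    "jurisdiction": re.compile(r"courts? of ([A-Z][\w\s]+?) shall have (?:exclusive )?jurisdiction", re.IGNORECASE),
}

def extract_clauses(contract_text: str) -> dict:
    """Return the first match for each clause type, or None if absent."""
    results = {}
    for name, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(contract_text)
        results[name] = match.group(1).strip() if match else None
    return results

sample = ("This Agreement is governed by the laws of England and Wales. "
          "The courts of England shall have exclusive jurisdiction.")
print(extract_clauses(sample))
```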

 

  • From Patent Docs: FDA Permits Marketing of First AI-based Medical Device; Signals Fast Track Approach to Artificial Intelligence.

 

  • SINGAPORE (Reuters) – In the not too distant future, surveillance cameras sitting atop over 100,000 lampposts in Singapore could help authorities pick out and recognize faces in crowds across the island-state. Some top officials in Singapore played down the privacy concerns. Prime Minister Lee Hsien Loong said last week that the Smart Nation project was aimed at improving people’s lives and that he did not want it done in a way “which is overbearing, which is intrusive, which is unethical”.

 

  • Google and AI Ethics: “After it emerged last month that Google was working with the Defense Department on a project for analyzing drone footage using “artificial intelligence” techniques, Google’s employees were not happy.” “(M)ore than 3,000 of the employees signed a letter to CEO Sundar Pichai, demanding that the company scrap the deal.” “Google Cloud chief Diane Greene … told employees Google was ‘drafting a set of ethical principles to guide the company’s use of its technology and products.’” “…Greene promised Google wouldn’t sign up for any further work on ‘Maven’ or similar projects without having such principles in place, and she was sorry the Maven contract had been signed without these internal guidelines having been formulated.”

 

  • House of Representatives Hearing: GAME CHANGERS: ARTIFICIAL INTELLIGENCE PART III, ARTIFICIAL INTELLIGENCE AND PUBLIC POLICY, Subcommittee on Information Technology, APRIL 18, 2018 2:00 PM, 2154 RAYBURN HOB.
  • In so many ways, technology is outpacing regulation, but this week’s questioning of Mark Zuckerberg has many wondering whether the US Congress is capable of intelligently crafting such regulations. They just don’t seem to understand. Also from those hearings, “Mark Zuckerberg said today (April 10) in his testimony before the US Congress that he could see AI taking a primary role in automatically detecting hate speech on Facebook in five to 10 years.”

 

  • From Norton Rose: AI is your new document drafter. “Document automation will ultimately save time and costs for clients, allowing the attorneys to focus on more intricate tasks. Document automation is, however, more effective with high-volume, low-complexity documents as there are few efficiency gains where a user is required to complete a detailed 20-page questionnaire to generate a single, complex document. In these cases, it may still be better for the lawyer to draft the document from an existing precedent with old-fashioned drafting notes.”
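As background to the Norton Rose point, the core of document automation is templating: questionnaire answers populate placeholders in a precedent, which is why it pays off for high-volume, low-complexity documents. A minimal, hypothetical sketch (the template and field names are invented):

```python
# Minimal sketch of questionnaire-driven document automation using templating.
# Template text and field names are invented for illustration.
from string import Template

PRECEDENT = Template(
    "This Services Agreement is made on $date between $client "
    "and $provider for a fee of $fee, payable $payment_terms."
)

questionnaire_answers = {
    "date": "1 May 2018",
    "client": "Acme Ltd",
    "provider": "Example Consulting LLP",
    "fee": "GBP 10,000",
    "payment_terms": "within 30 days of invoice",
}

print(PRECEDENT.substitute(questionnaire_answers))
```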

 

  • From Shearman & Sterling: “Shearman & Sterling publishes its sixth annual Antitrust Annual Report today.” “The EC has been very vocal about its concerns regarding the use of algorithms and artificial intelligence to engage in anti-competitive practices. The report explores possible scenarios and how an investigation would be approached.”

 

  • This sounds like an interesting new product: “Lawyers store their clauses in a multitude of places—email, local folders, old agreements. Clause Companion™ is a clause library that allows for easy storage, retrieval, and distribution of content, from within Microsoft Word and the documents they’re working on.”
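The product description is high-level; conceptually, a clause library is keyed storage plus search over approved language. A hypothetical sketch of that idea (invented data, and not Clause Companion’s implementation):

```python
# Hypothetical sketch of a clause library: store clauses with tags, retrieve by keyword.
# Illustrates the concept only; clause text and tags are invented.
clause_library = [
    {"title": "Confidentiality", "tags": {"nda", "confidentiality"},
     "text": "Each party shall keep the other party's information confidential."},
    {"title": "Limitation of Liability", "tags": {"liability", "cap"},
     "text": "Neither party's aggregate liability shall exceed the fees paid."},
]

def find_clauses(keyword: str) -> list:
    """Return clauses whose title, tags, or text mention the keyword."""
    keyword = keyword.lower()
    return [c for c in clause_library
            if keyword in c["title"].lower()
            or keyword in c["text"].lower()
            or any(keyword in t for t in c["tags"])]

for clause in find_clauses("liability"):
    print(clause["title"], "->", clause["text"])
```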

 

  • From Artificial Lawyer: “India-based CaseMine, … has now moved forward with a beta version of an English case law analytics system, which ‘enhances traditional legal research to move beyond mere keywords and retrieve relevant results using entire passages and briefs’, driven by the company’s NLP tech.”
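“Moving beyond mere keywords” usually means ranking authorities by their similarity to a whole passage or brief rather than by individual term matches. A hypothetical TF-IDF sketch of that general idea (this is not CaseMine’s actual NLP pipeline, and the sample texts are invented):

```python
# Hypothetical sketch of passage-based retrieval using TF-IDF cosine similarity.
# Sample cases and query are invented; real systems use richer NLP models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [
    "The court held that the exclusion clause was unreasonable under the Act.",
    "Damages for breach of contract must be foreseeable at the time of contracting.",
    "The tribunal found the dismissal procedurally unfair.",
]
query_passage = "Whether a limitation of liability clause survives a reasonableness challenge."

vectorizer = TfidfVectorizer()
case_vectors = vectorizer.fit_transform(cases)
query_vector = vectorizer.transform([query_passage])

# Rank cases by similarity to the whole passage, not just shared keywords.
scores = cosine_similarity(query_vector, case_vectors)[0]
for score, case in sorted(zip(scores, cases), reverse=True):
    print(f"{score:.2f}  {case}")
```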

 

  • “The Asser Institute in The Hague in collaboration with a number of partners will launch its first Winter Academy on Artificial Intelligence and International Law early 2019. Expert academic speakers will be offering perspectives on primarily legal but also ethical and technical aspects.” Details here.

 

  • Similar to using cameras for ‘sentiment analysis’, “Fidgetology™ rapidly quantifies body language – to assess mental health, or to estimate enjoyment of ads or other media. Brain Power developed Fidgetology in collaboration with Amazon Web Services using the company’s newest cloud-based artificial intelligence (AI), machine learning (ML), and computer vision tools….” Story here.

 

  • Here’s a good example of using AI to stay a step ahead of malware threats. “SE Labs Test Shows CylancePROTECT Identifies and Blocks Threats Years Before Malware Appears in the Wild”
  • Better, faster, cheaper: from Artificial Lawyer, “The UK’s Serious Fraud Office (SFO) has announced a partnership with US-based legal data company, OpenText, to make use of its Axcelerate AI-driven doc review tool. The SFO said in a statement that ‘by automating document analysis, AI technology allows the SFO to investigate more quickly, reduce costs and achieve a lower error rate than through the work of human lawyers alone’.” More here from the Law Society Gazette.

 

  • Smart Contracts: More about the Accord Project: “Freshfields Bruckhaus Deringer, Allen & Overy (A&O), and Slaughter & May joined the Accord Project, which already has some of the biggest law firms in the world as members. The project is pushing for the adoption of an open source technical and legal protocol that will accept any blockchain or distributed ledger technology – a so-called ‘blockchain agnostic’ standard.”

 

  • From Reed Smith: UK government publishes the Digital Charter and reaffirms creation of the Centre for Data Ethics and Innovation.

 

  • From Knobbe Martens: “According to a U.S. Food and Drug Administration press release, Viz.AI’s Contact application, its LVO Stroke Platform, was granted De Novo premarket review. According to PR Newswire, Viz.AI’s LVO Stroke Platform is the ‘first artificial intelligence triage software’ and with its approval ‘a new era of intelligent stroke care begins.’”

 

 

  • According to American Banker: Bank of America, Harvard form group to promote responsible AI. “(Cathy) Bessant, the bank’s chief operations and technology officer, wanted to bring in an academic perspective and she wanted to create a neutral place where experts from different sectors and rival companies could discuss AI and craft good policies.”

 

  • Horizon Robotics has debuted a new HD smart camera that boasts serious artificial intelligence capabilities and can identify faces with an accuracy of up to 99.7 percent, the company claims.

 

  • Ever wonder whether the text you’ve used to promote your product or service is perfectly aligned with your brand strategy? Well, “Qordoba, … today announced a revolutionary new capability for scoring emotional tone in product and marketing content.” “Qordoba’s content scoring is based on Affect Detection, a computer science discipline that applies artificial intelligence and machine learning to understand the primary emotion conveyed by written text. …, to identify the emotion associated with a specific combination of words, allowing developers and product teams to create more effective user interfaces (UI).”
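The release doesn’t explain Qordoba’s model, but the simplest form of affect detection scores text against emotion lexicons. A hypothetical sketch (the lexicon and labels below are invented; production systems learn these associations rather than hard-coding them):

```python
# Hypothetical sketch of lexicon-based affect detection for short marketing copy.
# The emotion lexicon is invented for illustration; real systems learn these weights.
EMOTION_LEXICON = {
    "joy": {"delight", "love", "enjoy", "effortless", "beautiful"},
    "trust": {"secure", "reliable", "guarantee", "proven"},
    "fear": {"risk", "lose", "miss", "warning"},
}

def primary_emotion(text: str) -> str:
    """Return the emotion whose lexicon overlaps most with the text."""
    words = set(text.lower().split())
    scores = {emotion: len(words & lexicon) for emotion, lexicon in EMOTION_LEXICON.items()}
    return max(scores, key=scores.get)

print(primary_emotion("Enjoy an effortless setup you will love"))  # joy
```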

 

  • Interesting, from Science Magazine: Could artificial intelligence get depressed and have hallucinations? “As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?”
  • I’ve posted quite a few things about AI tech that seem too sci-fi to be true, but this is easily the most far-fetched. I’ve found this on a half dozen sites, including directly from MIT. And none were posted on April Fools’ Day. Believe it or not, AlterEgo: A Personalized Wearable Silent Speech Interface. More here. If you can confirm or prove this story wrong, please let me know.

 

  • Back to reality, Artificial Lawyer reports that the grand finale of the Global Legal Hackathon is at hand.

 

  • This post from Corporate Compliance Insights is my first exposure to Robotic Process Automation (RPA). It’s an interesting analytic tool, described here as used to check for GDPR compliance.
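RPA in this context generally means scripted agents that step through records or screens and flag exceptions for a human to review. A hypothetical sketch of an automated GDPR-style retention check (the record fields and the two-year rule are invented for illustration):

```python
# Hypothetical sketch of an RPA-style compliance check: walk records, flag exceptions.
# The record fields and the retention rule are invented for illustration only.
from datetime import date

RETENTION_LIMIT_DAYS = 730  # assumed policy: data older than 2 years needs consent on file

customer_records = [
    {"id": 1, "collected_on": date(2015, 3, 1), "consent_on_file": False},
    {"id": 2, "collected_on": date(2018, 1, 15), "consent_on_file": True},
]

def flag_exceptions(records, today=date(2018, 4, 20)):
    """Return ids of records past the retention limit without consent on file."""
    return [r["id"] for r in records
            if (today - r["collected_on"]).days > RETENTION_LIMIT_DAYS
            and not r["consent_on_file"]]

print(flag_exceptions(customer_records))  # [1]
```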

 

  • Elon Musk, there you go again. In a story widely reported this past weekend, “Elon Musk Says AI May Become an Immortal Dictator if Not Regulated.”
  • Another service provider ↔ university partnership is shaping up in the UK: “Legal business DWF is set to launch a Knowledge Transfer Partnership, in conjunction with the University of Manchester, which will see the business further invest in and strengthen its proposition in the realms of legal tech, machine learning and AI to enhance service delivery and develop innovative solutions for its clients.”

 

  • From Dentons: “The 2018 Ontario Budget reaffirms the emphasis on innovative technologies and Ontario’s intention to become a leader in AI.”

 

  • From Waller: Cryptocurrency and Exchange-Traded Products: Will we see a Cryptocurrency ETP?

 

  • Vendor News: “Angelo, Gordon’s investor relations and compliance teams will use AI-assisted technology from Intapp to categorize the terms specified in each LPA and side letter as new clients are on-boarded, and then employ Intapp Terms to search and check individual investor requirements, and efficiently ensure compliance with those requirements.” Details here.

 

  • Friday “thought piece (page 12)”: ROBOT WARS How artificial intelligence will define the future of news. This is a good exploration of the appropriate roles for AI in journalism. Turns out the best solution is something of a middle ground, neither conceding the profession entirely to the bots nor excluding them completely. Good stuff.

 

  • And one more to get you thinking — from MIT: We could easily lose the AI race to countries taking the subject more seriously. This post offers advice as to how we can effectively compete. Here’s how the US needs to prepare for the age of artificial intelligence.
  • From Law360: BigLaw’s High Costs Drive Cos. Toward Boutique Law Firms. I expect the same forces that are driving companies to engage smaller law firms (i.e., desire for better, faster, cheaper services) are driving them to Alternative Legal Service Providers (ALSPs) and firms employing technology to better leverage their attorneys’ work.

 

  • From Dentons: From hipster antitrust to Big Data: fresh challenges to competition law? “(The Canadian Competition Bureau) does recognize that big data may facilitate innovative ways of implementing and verifying compliance with a cartel agreement; however, it states that it is “premature” to provide guidance on situations where competitor agreements are achieved through artificial intelligence without the intervention of humans.”

 

  • Cool move by K&L Gates: “Two Carnegie Mellon University faculty members have been appointed to new professorships created with funding from the K&L Gates Endowment for Ethics and Computational Technologies at CMU. The professorships will enable CMU to continue its leadership in the ethical, social and policy issues that arise as artificial intelligence and other computing technologies increasingly reshape society and daily life.”

 

  • From Jones Day: Protecting Artificial Intelligence and Big Data Innovations Through Patents: Functional Claiming. “While patents are not necessarily the only, or the best, protection in a given instance for AI and BD innovation, where patent protection is sought, practitioners need to pay attention to issues of functional claiming as well as issues of subject matter eligibility under Alice in order to execute successful patent coverage for AI and BD innovation.”

 

  • This report (Artificial Intelligence and your business: A guide for navigating the legal, policy, commercial, and strategic challenges ahead) is an excellent piece from Hogan Lovells. It doesn’t dig deeply into any area, but it points out the significant issues raised by AI in several practice areas, client industries and geographies. Marketing done well.

 

  • Here’s a thought-provoking post from Attorney at Work: Do Lawyers Have an Ethical Responsibility to Use AI?

 

  • Here’s a quick overview of the evolution of chatbots. “With the continuous evolvement in artificial intelligence, developers are making chatbots more human-like with personalities, capable of recognizing speech patterns and can interpret non-verbal cues to make interactions more effortless.”
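For contrast with the “more human-like” bots the quote describes, the earliest generation of chatbots was little more than pattern-and-response rules. A minimal sketch of that baseline (the patterns and canned responses are invented):

```python
# Minimal sketch of a first-generation, rule-based chatbot: pattern in, canned response out.
# Patterns and responses are invented; modern bots learn behavior from data instead.
RULES = [
    ("hours", "We are open 9am to 5pm, Monday to Friday."),
    ("price", "Plans start at $10 per month."),
    ("hello", "Hi! How can I help you today?"),
]

def reply(message: str) -> str:
    """Return the first matching canned response, or a fallback."""
    message = message.lower()
    for keyword, response in RULES:
        if keyword in message:
            return response
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("What are your opening hours?"))
```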

 

  • And a thought-provoking piece, this time from Above the Law. (I probably should have saved these for Friday). Machine Learning And Human Values: Can They Be Reconciled? “Taken as a whole, artificial intelligence will promote justice and prosperity — but as with any technology, it presents some ethical challenges.”

 

  • Microsoft hasn’t been quite as overt about its AI focus as Google/Alphabet and Amazon, but with its recent reorganization it has become more explicit about its “initiative to integrate artificial intelligence into every product and service offered”. Here’s a good overview of their stated plans.

 

  • From Scientific American: It’s Not My Fault, My Brain Implant Made Me Do It. As AI evolves into “augmented intelligence,” we will have a whole new raft of issues to deal with. “Where does responsibility lie if a person acts under the influence of their brain implant? As a neuroethicist and a legal expert, we suggest that society should start grappling with these questions now, before they must be decided in a court of law.” “Newer implants may have wireless connectivity. Hackers could attack such implants to use Ms. Q for their own (possibly nefarious) purposes, posing more challenges to questions of responsibility. Insulin pumps and implantable cardiac defibrillators have already been hacked in real life.”