• This story arrived in my inbox while I was making a luncheon presentation to a law firm in Toronto. It reports that “much of the foundational research into artificial intelligence originated in Canada, but we’ll have to work to stay a leader in the field.” It suggests ways to capitalize on that early lead.


  • From Artificial Lawyer: As legal hackathons get underway, they need focus to really make a difference. So, from Gillian Hadfield, Professor of Law and Economics at USC, here are 10 access-to-justice (A2J) problems in need of solutions.


  • As in cycling, where the lead rider in a peloton makes most of the decisions and fights the largest share of wind resistance, so it is with platoons of long-distance trucks. (What does that have to do with AI? I’m getting there.)

From Artificial Lawyer: “In Berlin, legal tech pioneer, Clause, successfully demonstrated a ‘live smart legal contract’ using IoT data … which handled logistics payments for a group of transport vehicles in real time as the audience watched. The amount due was based on the time each truck led the platoon. Payments were then made based on this data, which was executed via a smart contract.”
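
To make the payment logic concrete, here is a minimal sketch, written in Python rather than a smart-contract language, of how a settlement based on “time in the lead” could be computed once the IoT telemetry has been aggregated. The truck IDs, the payment pool, and the settle_platoon_payments function are illustrative assumptions, not details of Clause’s actual demonstration.

```python
# Illustrative sketch only; not Clause's implementation.
# Assumes telemetry has already been aggregated into per-truck lead times.

from dataclasses import dataclass


@dataclass
class LeadRecord:
    truck_id: str
    minutes_led: float  # time spent at the front of the platoon


def settle_platoon_payments(records: list[LeadRecord],
                            total_pool: float) -> dict[str, float]:
    """Split a payment pool among trucks in proportion to time spent leading.

    `total_pool` is a hypothetical figure (e.g., what the drafting trucks owe
    for the lead truck's extra fuel burn); the real contract terms are not
    described in the article.
    """
    total_minutes = sum(r.minutes_led for r in records)
    if total_minutes == 0:
        return {r.truck_id: 0.0 for r in records}
    return {
        r.truck_id: round(total_pool * r.minutes_led / total_minutes, 2)
        for r in records
    }


# Example: three trucks share a 300-unit pool based on who led, and for how long.
records = [
    LeadRecord("DE-TRUCK-1", 90.0),
    LeadRecord("DE-TRUCK-2", 60.0),
    LeadRecord("DE-TRUCK-3", 30.0),
]
print(settle_platoon_payments(records, total_pool=300.0))
# {'DE-TRUCK-1': 150.0, 'DE-TRUCK-2': 100.0, 'DE-TRUCK-3': 50.0}
```

In a live smart contract the same proportional split would be encoded in the contract logic and triggered automatically as telemetry arrives; the sketch above only shows the arithmetic.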


Recent studies on the threats posed by AI:

From the Business & Human Rights Resource Centre: “(R)eplacing human intelligence with machines could fundamentally change the nature of work, resulting in mass job losses and increasing income inequality. Algorithm-based decision-making by companies could also perpetuate human bias and result in discriminatory outcomes, as they already have in some cases. The significant expansion of data collected and analysed may also result in increasing the power of companies with ownership over this data and threaten our right to privacy.”


This report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, has been covered by several international media outlets. Its authors come from Oxford University’s Future of Humanity Institute; Cambridge University’s Centre for the Study of Existential Risk; OpenAI; the Electronic Frontier Foundation; the Center for a New American Security; and other organizations. The report “sounds an alarm about the potential malicious use of AI by rogue states, criminals and terrorists. Forecasting rapid growth in cyber-crime and the misuse of drones during the next decade – as well as an unprecedented rise in the use of ‘bots’ to manipulate everything from elections to the news agenda and social media – the report is a clarion call for governments and corporations worldwide to address the clear and present danger inherent in the myriad applications of AI. The report also recommends interventions to mitigate the threats posed by the malicious use of AI.”
