• From Down Under: Dean of Swinburne Law School, Professor Dan Hunter, and Swinburne researcher Professor Mirko Bagaric say artificial intelligence (AI) could improve sentencing procedures by removing emotional bias and human error. It seems these gentlemen are unaware of the several instances in 2017 of AI exhibiting bias and even racism. AI has a way to go to fulfill their expectation that “(AI) would also eliminate judicial subconscious bias in sentencing that results in people of certain profiles, such as indigenous offenders, being sentenced more harshly.” They do admit that this transition will take some time, partly because people aren’t yet ready for machines to assume this role. For now, they recommend AI working alongside judges. Their full article can be purchased here.


  • Artificial Lawyer reports that “(l)eading legal AI company, Kira Systems, has partnered with document management system (DMS), NetDocuments, to allow its customers to make use of AI-driven analysis tools. The move is part of NetDocuments’ AI Marketplace, which the company says ‘paves the way for enhanced document understanding and matter intelligence’.” Kira says the “strategic objective of Kira is not to repeat what RAVN did with iManage,” but the parallels are hard to miss. Press release here.


  • Also from Artificial Lawyer, “Thomson Reuters is introducing a new AI-driven tool, Data Privacy Advisor, powered by IBM’s Watson suite of machine learning technology. The tool is ‘a specialised data privacy research solution that brings the company’s collection of global legal and regulatory information together with expansive data privacy guidance from Practical Law editors, curated news, and a question-answering feature built by artificial intelligence and technology professionals from Thomson Reuters and IBM Watson.’” Here’s the press release. Good point here by Ron Friedmann about this development — can individual law firms undertake this training?


  • From law.com: “(this) week kicks off another Legalweek and its flagship Legaltech conference. Alongside panel discussions taking place all week will be a host of companies exhibiting the latest and greatest in legal technologies. A number of companies will debut new products and upgrades while at the show. And while Legaltech News would love to cover them all, we only have so much time (and space!). Here’s a by-no-means-exhaustive snapshot of some of this news.” And here’s a link to Legalweek’s ongoing conference coverage. I expect excellent coverage, especially of all things AI, by LexBlog’s new publisher and editor-in-chief, Bob Ambrogi at LegalweekMonitor.


  • Here’s a realistic essay about AI in law from Norton Rose Fulbright South Africa Inc. The basic premise is that “Law firms that are proactive about incorporating AI in providing more value to clients will eventually outrun competitors who ignore it.” Yep.


  • I’ve reported several times that the black-box nature of most AI, and our inability to trace its conclusions back through simple if-then logic, have caused quite a few problems, not the least of which involves courts’ inability to clearly assign liability. Late last year I reported that AI itself is now being assigned the task of figuring out how to make such explanation possible. This interesting essay from Wired cautions, “Don’t Make AI Artificially Stupid in the Name of Transparency.” It also suggests some work-arounds that might solve the problem, at least in specific situations.


  • It seems full realization of the dystopian world of 1984 is getting a bit closer, at least in Dubai. Last November I reported that “Dubai Police will soon be able to monitor you inside your car through an artificial intelligence machine that will be installed on the officer’s vehicle.” Now the “Dubai Police General HQ announced the launch of an artificial intelligence surveillance programme, called Oyoon (Eyes). It is a part of the Dubai 2021 plan and it aims to enhance the emirate’s global position in terms of providing a safer living experience for all citizens, residents and visitors.” “The aim of the project is to create an integrated security system that utilises modern, sophisticated technologies and artificial intelligence features to prevent crime, reduce traffic accident related deaths, and prevent any negative incidents in residential, commercial and vital areas.” I expect that “prevent any negative incidents” could be interpreted rather broadly.


  • Apple and AI: reports suggest that Apple is cutting production of the iPhone X, its smartest phone, in half. Meanwhile, last Friday it finally began selling the HomePod, its AI-driven smart speaker competitor to Sonos, Amazon’s Alexa and Google Home, with little fanfare and no advertising support, perhaps in the expectation that this very expensive rival is doomed from the start. (Mine is scheduled to arrive on February 9. I’ll let you know what I think of it.) All of the recent reviews I’ve seen comparing Alexa, Google Home, Siri and Cortana have Siri and Cortana way back in 3rd and 4th place. Apple seems to be in serious catch-up mode in the race to capture market share with AI-driven devices for consumers.