  • Toronto Startup Blue J Legal Uses AI to Help Predict How Courts Will Rule on Employment Law Cases. Details here. (A lot of legal AI is centered in Toronto.)


  • “On 30 May 2018, the European Patent Office (EPO) held a ‘first of its kind’ (as it was called by one of the EPO officials) conference on ‘Patenting Artificial Intelligence’.” A summary of the proceedings can be found here. “The main thread connecting all the sessions was the common understanding that AI provides an immense opportunity for innovation. Moreover, the patent system must respond adequately in order to ensure that it does not stifle but enhance such innovation. The main goal of the event was therefore to facilitate a discussion about the challenges in patenting AI.”


  • Here’s a report on some interesting research conducted by Gowling. Among their conclusions, “(d)iscussions concerning pitfalls for the development of the blockchain family were dominated by a single point – its association with Bitcoin.”

The research should be considered qualitative and exploratory, as all they say about the methodology is: “The research was conducted by BizWord Ltd (www.bizword.co.uk), an independent business consultancy. Specific sources have been listed in the report. To compile the report, we undertook:
    • A quantitative, online survey, which was sent to FinTech experts in businesses headquartered around the world.
    • In-depth interviews with a panel of experts during early 2018.
    • Desktop research and analysis of publicly-available information, industry studies and forecasts.”


  • Press release: Simpson Thacher Partners with Columbia Business School to Enhance Incoming Associate Training. “Olga Gutman, Co-Chair of the Firm’s Attorney Development Committee, said, ‘The use of artificial intelligence is increasing the efficiency of legal work, particularly that of junior associates. We need to prepare our new associates to perform more advanced legal work for our clients from day one’.”


  • From McLane Middleton: United States: Everything Is Not Terminator: Using State Law Against Deceptive AI’s Use Of Personal Data. “Although killer drones and autonomous weapons get the most publicity when it comes to the dangers of artificial intelligence (“AI”),1 there is growing evidence of the dangers posed by AI that can deceive human beings. A few examples from recent headlines (click here for footnote references and the rest of the post):
    • AI that can create videos of world leaders—or anyone— saying things they never said; 2
    • Laser phishing, which uses AI to scan an individual’s social media presence and then sends “false but believable” messages from that person to his or her contacts, possibly obtaining money or personal information; 3 and
    • AI that analyzes data sets containing millions of Facebook profiles to create marketing strategies to “predict and potentially control human behavior.” 4


  • From Above the Law: 5 Myths About Litigation Finance. “On an episode of The Good Wife, a litigation financier had a computer that would take in data about a case and immediately spit out the numbers and parameters for funding. But that’s TV, not the real world. … (W)hen it comes to the underwriting process for litigation finance, humans remain firmly in control.”


  • About surveillance and your privacy: It’s RoboCop-ter: Boffins build drone to pinpoint brutal thugs in crowds. (88% accuracy.) “The use of AI for surveillance is concerning. Similar technologies involving actual facial recognition, such as Amazon’s Rekognition service, have been employed by the police. These systems often suffer from high false positives, and aren’t very accurate at all, so it’ll be a while before something like this can be combined with drones.”
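
A quick aside on that “88% accuracy” figure: in a crowd where genuinely violent individuals are rare, even a high per-person accuracy can produce far more false alarms than true detections. The sketch below (Python; the crowd size, the offender count, and the simplifying assumption that errors fall evenly on both groups are hypothetical illustrations, only the 88% comes from the article) walks through the arithmetic.

```python
# Minimal sketch: why a headline per-person accuracy of 88% can still mean
# a flood of false positives when the behaviour being detected is rare.
# crowd_size, violent, and the even error split are hypothetical assumptions.

crowd_size = 10_000          # hypothetical number of people scanned
violent = 20                 # hypothetical number of genuinely violent individuals
accuracy_per_person = 0.88   # headline per-person accuracy from the article

# Simplifying assumption: 12% of each group is misclassified.
false_negatives = round(violent * (1 - accuracy_per_person))                  # offenders missed
false_positives = round((crowd_size - violent) * (1 - accuracy_per_person))   # innocent people flagged
true_positives = violent - false_negatives

precision = true_positives / (true_positives + false_positives)

print(f"Correct detections:              {true_positives}")
print(f"Offenders missed:                {false_negatives}")
print(f"Innocent people wrongly flagged: {false_positives}")
print(f"Share of alerts that are real:   {precision:.1%}")
```

With those assumed numbers, roughly 1,200 innocent people are flagged for every 18 correct detections, so only about 1.5% of alerts point at an actual offender, which is exactly the false-positive problem the quote is pointing at.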