Events
CUHK LAW CFRED Seminar – ‘Making Algorithmic Advisers Careful: The Content and Scope of the Contractual Duty of Reasonable Care in Automated Advice Provided to Consumers’ by Prof. Jeannie Paterson (Online)
30 Mar 2022
5:00 pm – 6:00 pm
Online (Zoom)
Prof. Jeannie Paterson
Professor of Law
Co-director, Centre for AI and Digital Ethics
Digital Access and Equity Research Program, Melbourne Social Equity Institute
Melbourne Law School
Jeannie Marie Paterson teaches and researches in the fields of consumer protection law, consumer credit and banking law, and AI and the law.
Jeannie’s research covers three interrelated themes:
1. The relationship between moral norms, ethical standards and law;
2. Protection for consumers experiencing vulnerability;
3. Regulatory design for emerging technologies that are fair, safe, reliable and accountable.
Jeannie has published widely on these research topics in leading journals and edited collections, including as the co-editor, with Elise Bant, of Misleading Silence (2020). Jeannie is also the co-author of a number of leading textbooks: (with Andrew Robertson) Principles of Contract Law (6th ed, 2020), Corones’ Australian Consumer Law (2019) and (with Hal Bolitho and Nicola Howell) Duggan and Lanyon on Consumer Credit Law (2020). Her scholarly work has been cited by courts, including the High Court of Australia and the Supreme Court of Canada.
Jeannie completed her BA/LLB(Hons) at ANU and her PhD at Monash University. She previously lectured at the Faculty of Law at Monash University and prior to that time was a solicitor at Mallesons Stephen Jaques (now King & Wood Mallesons). Jeannie holds a current legal practising certificate and regularly consults to government, regulators and not-for-profit organisations.
Jeannie is a Fellow of the Australian Academy of Law. She is an editor for consumer protection in the Australian Business Law Review and an editor of the Journal for Law, Technology and Humans.
‘Algorithmic advisers’ are digital tools that provide solicited, personalised recommendations or advice to consumers, generated by algorithms of varying degrees of sophistication. Examples include comparison websites, advisory apps, chatbots, and robo-advisers. Algorithmic advisers offer considerable potential for supporting welfare-enhancing choices by consumers, particularly in complex, high-cost or high-risk contexts, such as finance, insurance, legal or healthcare settings. They also carry risks of harm. Some of these risks are those that have been raised in digital markets more widely, namely the potential for data harvesting, loss of privacy and bias. Other risks are specific to advisory services generally, in particular the risk of self-dealing. Additionally, algorithmic advisers may simply fail to provide the service they have been contracted to provide. In using algorithmic advisers, consumers are seeking advice that is personalised to them. They expect a nuanced and specific response. But commonly they are poorly placed to scrutinise the quality of what they receive. Consumers turn to algorithmic advisers for the very reason that they themselves lack the skills to navigate the field in question. In principle the response lies in the conduct and service standards provided by law. Firms providing algorithmic advice may be subject to fiduciary duties that demand utmost loyalty. They will be subject to an implied contractual duty of reasonable care and skill. But there is a question about how this duty applies and is assessed when the advice in question is automated. What should be demanded from that advice and from the firm that is providing the automated service? Drawing on technical insights on algorithmic auditing and explainability, this paper considers the content and expectations of the duty of reasonable care applying to algorithmic advice. It further explores the related question of the extent to which the contract terms can and should define the scope of this duty.
Language: English
CPD credit is available upon application and subject to accreditation by the Law Society of Hong Kong (currently pending).