
Contesting automated decision making in legal practice: Views from practitioners and researchers

Rapid summary report from an expert workshop: 'Contesting AI Explanations in the UK', 6 May 2021


Authors: King's College London and BIICL

Is UK Artificial Intelligence (AI) policy too speculative? Have we underestimated the extent to which relevant technological change is generating legal problems now, rather than ethical dilemmas in the future? King's College London's Dickson Poon School of Law and the British Institute of International and Comparative Law (BIICL) have been collaborating on events to consider these and related questions over the first half of 2021, as part of BIICL's ongoing series on AI.

The first of the joint BIICL-King's events was a public panel event on 24 February 2021, details of which are available here. Legal challenges involving automated decisions and other AI-related developments, especially those implicating public authorities, have become increasingly prominent in the UK over recent years. This lively discussion indicated that a wide range of significant legal issues are relevant. It suggested that broader social context is important, with issues emerging differently in different sectors of activity. Above all, it showed that getting a grip on the issues requires asking questions 'upstream', including at more political levels of decision-making which determine in advance where and how relevant systems are deployed. The discussion conveyed a general sense that the law has more to offer than just trying to clear up the mess after things have already happened.

A second event was accordingly planned as a closed expert workshop on 6 May 2021 to consider the issues in more detail. This rapid summary report records immediate outcomes from the workshop based on the organisers' impressions, primarily for interim use pending reporting and recommendations based on full analysis.

The workshop was warmly received, with over 40 participants on the day. It opened with a conversation between two prominent commentators in the field. Its focus was five sector group discussions, covering: Constitution; Criminal Justice; Health; Education; and Finance. Each group was led by a research collaborator and included participants from activist, practitioner and other backgrounds as well as from within academia.

General observations

The governance of automated decision making in the UK, in particular through forms of artificial intelligence (AI/ADM), appears to have shifted from being a question of ethical principles to being a problem of ordinary legal practice, without sufficient attention to how the governance of AI/ADM through law should be made effective and legitimate.

Especially in view of the deep reliance of public bodies on the AI/ADM capacities of the private sector, inattention to key questions of law-based governance is a matter of concern. Questions include: to what extent should the necessary transparency and accountability of AI/ADM be achieved through legal rights that afford direct public contestation of automated decision making; and how can effective public contestation of AI/ADM be achieved without placing disproportionate burdens on its beneficial uses?

The workshop was successful in the following objectives:

  • Gathering a range of views from practitioners and researchers on the experience of AI/ADM contestation in legal practice.
  • Raising the question of effective public agency in public interest contestation of AI/ADM.
  • Shedding light on the dispersed, formative processes of law-based governance of AI/ADM.
  • Focussing on sectoral differences in law-based governance of AI/ADM.

Opening conversation

Swee Leng Harris (Luminate / Visiting Senior Research Fellow at King's) engaged Prof. Frank Pasquale (Brooklyn Law School) in a conversation based on the agenda themes. This was an advanced conversation addressing cutting-edge issues in the field from a global perspective. The organisers' impressions from the conversation included the following:

  • The UK is far from alone in experiencing public controversies over automated decision-making (the example was given of the Robodebt scandal in Australia*1).
  • It is too early to say whether the European Union's proposed AI Regulation will be effective, but the initiative certainly warrants close attention.
  • These are dynamic problems, with technological change affecting how legal processes are administered*2 and regulation becoming more difficult as systems become more sophisticated.
  • Potential harms appear to be closely bound up in some of the prospective benefits (for example, risks of discrimination and bias in prospective efficient process personalisation).
  • There is greater potential for law to 'help direct - and not merely constrain' the development of AI*3, especially in applications with higher apparent risk of harm (for example, drawing on policy experience with pharmaceutical licensing) and especially where harms are diffuse (for example, drawing on experience with environmental law).

Sector group discussions

Each of the five sector group discussions addressed a common agenda and defined terms, focusing on three themes using the following questions:

  1. To what extent are legal actions implicated in contestations of current or potential AI applications in this sector?
  2. Are potential harms from AI well understood and defined in the sector?
  3. How might reflections on legal actions and potential harm definitions improve relevant governance processes?

The intention in using sector groups was to enable comparison between sectors. However, the exact definition of sector scope was left to the discretion of the group leads, as follows:

Constitution:  Joe Tomlinson, University of York
Criminal Justice:  Pete Fussey, University of Essex
Health:  Claudia Pagliari, University of Edinburgh
Education:  Hanna Smethurst, Newcastle University
Finance:  Sir William Blair, Queen Mary, University of London

Theme 1: legal contestation

Most of the groups considered that AI-related legal contestation is likely to increase over the years to come. It was apparent from the discussions that examples have already begun to proliferate, including for example:

  • Questioning aspects of automation in the Universal Credit benefits system, notably in the Johnson case which demonstrated harm to single working mothers*4.
  • Challenging bias in the application of an algorithm used to process visa applications, with the Home Office unable to show that its treatment of applicants was not racist*5.
  • Contributing to the U-turn following the 2020 A-level results 'fiasco', which apparently affected the prospects of tens of thousands of students*6.
  • Disputing police uses of live facial recognition software in street video surveillance, especially through the Bridges case which showed breaches of fundamental rights*7.
  • Contesting the manner in which tech companies have been given access to people's NHS data, for example in the Royal Free investigation or the Palantir action*8.
  • Requesting closer consideration of automated systems for the administration of educational assessments, through an action concerning their use in the Bar Exams*9.
  • Beginning to address the special problems involved in commercial contracting for AI, noting the Tyndaris case in particular (although this was settled out of court)*10.

The discussions tended to confirm the perspective of the panel event, that arguments over AI in law implicate a broad range of fields. It appeared from the Constitution discussion that there are profound implications for public administrative law and that judicial reviews of public body decisions are playing a leading role in legal contestation of AI. Once again, transparency was discussed as an important foundational objective of relevant actions (implicating Freedom of Information and disclosure).

Some of these actions were considered to have been highly successful in terms of helping people assert their rights. However, the discussions tended to emphasise the difficulty and uncertainty of bringing claims, with significant power imbalances affecting outcomes in seemingly more benign settings such as the Education system, as well as in fields such as Criminal Justice where the relevance of coercive relationships is well-recognised.

Broadly speaking, levels of contestation seemed proportionate to the degree to which relevant technologies were regarded as having been developed in a sector (though with some delay for awareness to grow). For example, in both Health and Education, Covid-19 effects were seen to have spurred technological deployments in ways that have since motivated challenges. One interesting apparent anomaly was the Finance group, in which algorithmic decision-making and automation were considered to be relatively advanced (including the development of machine learning technologies*11) but in which levels of contestation appear to be relatively low.

Theme 2: definition of harms

This theme provoked some of the deepest and most detailed discussions in the groups. It was mentioned that there have been initiatives aimed at clearly defining some relevant harms in law, for example: harms from profiling or solely-automated decision-making with legal or similarly significant effects in data protection regulation; or from algorithmic trading without effective systems or risk controls under the MiFID regime. However, other aspects were mentioned as clearly still reflecting considerable uncertainty, for example: how to classify software in medical device regulatory systems; and how consumer protection rules should adapt for contracting in which individuals' understanding of 'data-for-services' bargains may be limited.

One clear impression across the groups was that discussions tended to include contemplation of the risk of institutional harms, or harms involving apparently profound but poorly-understood aspects of social systems. The Health group considered unintended consequences, using the example of appointment attendance probability scoring as a seemingly innocuous administrative initiative with clear potential to drive exclusion. Questions about diffuse harms recurred across groups, including the idea that aggregation of individual harms at the collective level is easier in some situations than in others (for example, benefits vs. privacy). But harm diffusion was discussed in the greatest detail for Criminal Justice. The presumption of innocence was noted as an example of an important principle vulnerable to subversion in some AI applications. The diffusion of responsibility through deployment of AI systems was raised as something that might serve social purposes as a form of 'agency laundering'. Questions were also raised about the risks of vicarious harms (or 'chilling effects'), challenging assumptions that harm is limited to the 'subjects' of AI by suggesting that there may be risks to people working with relevant systems too.

Various comments underlined the general view that relevant harms remain rooted in human responsibility rather than arising from any particular technological flaw. It was noted that human psychological bias remains relevant, not just at the individual level in terms of excessive trust in or suspicion of system outputs, but in terms of collective decisions about whether or how to deploy technology for specific purposes (especially where these appeared to have dubious justifications in scientific terms). It was suggested that the need to monitor and evaluate outcomes from AI systems tends to be underestimated as an integral (and potentially resource-intensive) part of safe and responsible AI deployment.

Theme 3: governance processes

This theme yielded a large number of observations and suggestions, and will provide a focus of analytical efforts to propose overall policy recommendations from the workshop. Immediate impressions included:

  • There are widespread expectations that this is a policy area that will develop dynamically, both in the short term given government initiatives and the EU's recent announcements and in the long term given the fundamental nature of relevant technological changes.
  • The government is attracting a great deal of attention as a respondent in AI-related legal contestation, relative to private organisations. On the claimant side, it is civil society activism that has become more prominent, while public regulators (notably the Information Commissioner's Office) are generally seen to prefer guidance over action.
  • Legal administration, more than the law itself, is regarded as the main deficiency when it comes to AI-related issues. Apparently for a combination of reasons, there has been a lack of energy about testing relevant issues in law.
  • The question of 'upstream' decision-making is important, but should include checks and balances throughout relevant decision processes rather than only seeking prior authorisation.
  • Although well-intentioned ethics panels have made some valuable contributions within organisations and policy processes, on their own they are unlikely to continue to provide sufficient legitimacy.
  • It is important that questions about specific AI applications include: should we be using this technology for this at all, at least for the time being?

Generally speaking, it was apparent to the organisers that legal professionals' capabilities and experience have a great deal to offer UK AI policy, both in fresh perspectives and in potentially enlarged contributions. Future efforts in this work will seek opportunities to build on the strengths of the small research network developed for event implementation.

Event organisers

This event was organised by Archie Drake (King's College London) and Dr. Irene Pietropaoli (BIICL), in partnership between BIICL (led by Prof. Spyros Maniatis, with Anuj Puri supporting) and the King's College London Dickson Poon School of Law (led by Prof. Perry Keller). King's work on this event was supported by the EPSRC under the Trust in Human-Machine Partnership (THuMP) project EP/R033722/1.

Our thanks to Prof. Frank Pasquale, Swee Leng Harris, Dr. Joe Tomlinson, Prof. Pete Fussey, Dr. Claudia Pagliari, Hanna Smethurst, Sir William Blair and to all of the participants for their time and thoughtful contributions.

Notes:

*1. See for example: https://www.bbc.co.uk/news/world-australia-54970253 ; https://gordonlegal.com.au/robodebt-class-action/robodebt-faqs/
*2. Susskind, R., & Susskind, D. (2015). The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford University Press.
*3. Pasquale, F. A. (2019). Data-Informed Duties in AI Development, Columbia Law Review, 119, 24. https://papers.ssrn.com/abstract=3503121 
*4. https://cpag.org.uk/welfare-rights/legal-test-cases/universal-credit-assessment-period-inflexibility  
*5. https://www.foxglove.org.uk/news/home-office-says-it-will-abandon-its-racist-visa-algorithm-nbsp-after-we-sued-them 
*6. https://www.bbc.co.uk/news/uk-53826305 
*7. https://www.libertyhumanrights.org.uk/issue/legal-challenge-ed-bridges-v-south-wales-police/ 
*8. https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/07/royal-free-nhs-foundation-trust-update-july-2019/ ; https://www.opendemocracy.net/en/ournhs/weve-won-our-lawsuit-over-matt-hancocks-23m-nhs-data-deal-with-palantir/ 
*9. https://blog.okfn.org/2021/02/26/open-knowledge-justice-programme-challenges-the-use-of-algorithmic-proctoring-apps/
*10. Tyndaris v MMWWVWM Ltd [2020] EWHC 778 (Comm) (22 April 2020)
*11. https://www.fca.org.uk/publication/research/research-note-on-machine-learning-in-uk-financial-services.pdf 

This rapid report was prepared by Archie Drake and Prof. Perry Keller of King's. It highlights impressions from the event, but it is necessarily partial and does not reflect individual contributions. 21 May 2021
