CONFERENCE PROGRAMME
U-Residence, Vrije Universiteit Brussel, Boulevard Général Jacques 271, 1050 Brussels
and online
All times are CET
Thursday 3 November 2022
- 8.30 – 9.00
Registration
- 9.00 – 9.05
Welcome to the Edge
- 9.05 – 10.00
Keynote
Mireille Hildebrandt: Computational 'law' on edge
Speaker bio
CRCL22 General Co-chair Mireille Hildebrandt is a Research Professor on ‘Interfacing Law and Technology’ at Vrije Universiteit Brussel (VUB), and part-time Chair of Smart Environments, Data Protection and the Rule of Law at the Institute for Computing and Information Sciences (iCIS) at Radboud University, Nijmegen. In 2015 she published Smart Technologies and the End(s) of Law with Edward Elgar, and in 2020 she published Law for Computer Scientists and Other Folk in open access with Oxford University Press. She received an ERC Advanced Grant for her project ‘Counting as a Human Being in the Era of Computational Law’ (2019–2024), which funds COHUBICOL. In that context she co-founded, together with Laurence Diver, the international peer-reviewed Journal of Cross-Disciplinary Research in Computational Law (co-Editor-in-Chief: Frank Pasquale).
https://cohubicol.com/about/research-team/#mireille-hildebrandt
Coffee (30 mins)
- 10.30 – 11.15
Giovanni Sileno, ‘Code-Driven Law NO, Normware SI!’
Reply (law) by Laurence Diver
Abstract
With the digitalization of society, interest, research effort and debate around the concept of “code as law” — and the complementary “law as code” — have grown considerably. Yet most arguments focus on what is expressed by contemporary computational methods and artefacts: machine-learning methods, smart contracts, rule-based systems. Aiming to go beyond this conceptual limitation, this paper proposes “normware” as an explicit third level for approaching the interpretation and, more importantly, the design of artificial devices, and argues that this provides an irreducible stance complementary to software and hardware.
- 11.15 – 12.00
Denis Merigoux, Marie Alauzen, and Lilya Slimani, ‘Rules, Computation and Politics: Scrutinizing Unnoticed Programming Choices in French Housing Benefits’
Reply (law) by Lyria Bennett Moses
Abstract
The article questions the translation of a particular legal statement, a rule for calculating personal rights, into a computer program able to activate the rights of the citizens concerned. It does not adopt a theoretical perspective on the logic of law and computing, but focuses on contemporary welfare states by studying the calculation of housing benefit in France. Lacking access to Cristal, the source code of the calculation, we replicated the code base and met with the writers of the housing law in the ministries to conduct a critical investigation of the source code. Through these interdisciplinary methods, we identified three types of unnoticed micro-choices made by developers when translating the law: imprecision, simplification and invisibilization. These methods also uncover significant social understanding of the ordinary writing of law and code in the administration: the absence of a synoptic point of view on a particular domain of the law, the non-pathological character of errors in published texts, and the prevalence of a frontier of automation in the division of bureaucratic work. These results, which follow from making programming choices explicit, lead us to plead for a re-specification of the field of law and informatics and a reorientation of investigations in the sociology of law.
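For readers wondering what an unnoticed micro-choice of "imprecision" can look like in practice, here is a minimal Python sketch. The rule and figures are invented for illustration and are not drawn from Cristal or the actual French housing-benefit formula; the point is only that two reasonable encodings of one statutory rounding rule can diverge by a cent.

```python
# Hypothetical rule, for illustration only: "the benefit is 70% of the
# rent, rounded to the nearest cent". Two defensible encodings diverge.
from decimal import Decimal, ROUND_HALF_UP

def share_float(rent: float) -> float:
    # Encoding 1: binary floating point. 0.7 is not exactly representable,
    # so 100.75 * 0.7 evaluates to 70.52499999..., which rounds down.
    return round(rent * 0.7, 2)

def share_decimal(rent: str) -> Decimal:
    # Encoding 2: exact base-10 arithmetic with explicit half-up rounding,
    # arguably what a statute drafted in decimal amounts presupposes.
    return (Decimal(rent) * Decimal("0.7")).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

print(share_float(100.75))      # 70.52
print(share_decimal("100.75"))  # 70.53 -- a one-cent, unnoticed micro-choice
```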
Lunch (1 hour)
- 13.00 – 13.55
Keynote
Virginia Dignum: A responsible and relational conceptualisation for computational law
Speaker bio
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, and director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software, the largest Swedish national research programme on fundamental multidisciplinary research into the societal and human impact of AI. She is a member of the Royal Swedish Academy of Engineering Sciences (IVA) and a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the Global Partnership on AI (GPAI), the World Economic Forum’s Global Artificial Intelligence Council, the Executive Committee of the IEEE Initiative on Ethically Aligned Design, ALLAI (the Dutch AI Alliance) and the EU’s High-Level Expert Group on Artificial Intelligence; she also leads UNICEF’s guidance on AI and children and is a member of the UNESCO expert group on the implementation of the AI recommendations. She is the author of Responsible Artificial Intelligence: Developing and Using AI in a Responsible Way (Springer, 2019).
- 13.55 – 14.40
Bhumika Billa, 'Law as Code and Power: Towards an Information Theory of Law'
Reply (CS) by Michael Veale
Abstract
This article is an inquiry into the informational nature of legal systems and how they communicate internally and externally. For this purpose, I use two external frames — C.E. Shannon’s information theory and Niklas Luhmann’s systems theory — to explore and describe features of legal systems that otherwise cannot be explained. These features include the exclusivity, reflexivity and adaptability of law. While Shannon’s theory lends the apparatus to make an agent-focused critique, Luhmann’s frame helps conceptualise a broader systemic critique of the law and the way it encodes social reality. In using two of the many possible theories of information and/or communication to explore the nature of legal systems, the paper begins to link law with cybernetic thinking more closely. The choice of theories rests partly in the objective to make an internal critique and to test the limits both of the computability of law and of the computational approach to problematising law.
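As orientation for readers new to Shannon's apparatus, the standard quantities it contributes are the entropy of a source, mutual information and channel capacity, given here in their textbook form (the paper's own deployment of the frame may use different notation):

```latex
% Textbook definitions from Shannon's 1948 framework, for orientation only.
H(X)   = -\sum_{x} p(x)\,\log_2 p(x)  % entropy: average information of a source
I(X;Y) = H(X) - H(X \mid Y)           % mutual information: what Y reveals about X
C      = \max_{p(x)} I(X;Y)           % channel capacity: best achievable rate
```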
Coffee (20 mins)
- 15.00 – 15.45
Sue Anne Teo, 'The Unbearable Likeness of Being: The Challenge of Artificial Intelligence towards the Individualist Orientation of International Human Rights Law'
Reply (CS) by Reuben Binns
Abstract
The increasing use of emerging technologies such as artificial intelligence has raised discrete human rights issues concerning the right to privacy, non-discrimination, freedom of expression and data protection. Less explored, however, are the ways in which artificial intelligence/machine learning (‘AI/ML’) systems are challenging the very core conceptions that sustain the edifice of the contemporary human rights framework. Policy makers and human rights practice proceed from the sufficiency of the human rights regime to tackle such new challenges, operating within a ‘normative equivalency’ paradigm that claims to be able to accommodate the novelty of modalities and harms brought forth by emerging technologies such as AI/ML. However, this paper highlights that one key element of the human rights conceptual framework, its individualist orientation of rights protection, is being challenged.
To set the stage, the paper adopts a socially situated understanding of human rights – acknowledging the socially embedded nature of individuals within societies. The paper argues that social embeddedness was itself an implicit assumption within the rights found in the 1948 Universal Declaration of Human Rights. This conveys individuals in relations with others in communities, situated within organized societies with corresponding political and governance institutions positioned to protect these sets of rights. The ubiquity of emerging technologies such as AI/ML systems appears against this backdrop whereby individuals are increasingly being read, modulated and constructed by such systems. In turn, AI/ML systems promise faster, better, cheaper optimisations of goals and performance metrics across a variety of diverse sectors.
This paper has two objectives. First, it claims that AI/ML systems are challenging the individualist orientation of the human rights framework through a process of structural atomization of individuals in ways that are fundamentally misaligned with international human rights law. This atomization operates in three ways. First, data points that group, infer and construct individuals through their likeness instrumentally atomize individuals as means to ends not of their own choosing, through AI/ML systems in which the situated individual has little or no say. The algorithmic modulations of AI/ML systems can be opaque, hidden and unknown. The atomization of individuals also affects the experiential comparators necessary to claim a deviation from rights standards.
Second, individuals risk instrumentalization through optimisation, wherein the efficiency-driven framing of AI/ML tends to encourage problem solving in ways that respond to computational tractability. Thus, it is not that individuals happen to be instrumentalized by AI/ML systems; rather, they cannot help but be instrumentalized when the objective to be gained is one of optimisation. Representation of social and physical phenomena is necessarily flattened, and the messiness of social and moral contestation is replaced with questions of fair data representation and fairness of AI/ML, compacting incommensurable values into computational optimisations. Rights, however, are not (straightforwardly) about optimisation.
Third, contextual atomization through the AI/ML-mediated shaping of epistemic and enabling conditions can threaten the antecedent conditions for the socially situated exercise of moral agency and, with it, human rights. Such precarity exposes the inadequacy of human rights responses that focus upon harms through an exogenous (perceivable and observable) typology instead of on structural conditions as enablers of harm. Further, the harm typology accommodated within the human rights framework assumes an implicit directionality and is thus challenged by the multi-directionality of potential harms and the multi-stability of technologies such as AI/ML. Even this account is inadequate, however: the process of mediation between individuals and emerging technologies such as AI is a process of co-creation, and individuals can be intertwined as co-authors of resulting harms.
While the diagnosis of the problem space is the main focus of the paper, its second objective is briefly to address possible ways to respond to the concerns raised. To this end, the paper examines three exemplars of extensions of the solution space and their normative justifications: extension of the individual, extension of rights and, finally, extension of the governance space. The paper concludes that an extension of the governance space is best placed to address this challenge.
- 15.45 – 16.00
Wrap up
- 16.00 – 17.00
Keynote
Emily M. Bender: How to build language processing applications that work — and expose those that don’t
Speaker bio
Emily M. Bender is a Professor of Linguistics and an Adjunct Professor in the School of Computer Science and the Information School at the University of Washington, where she has been on the faculty since 2003. Her research interests include multilingual grammar engineering, computational semantics, and the societal impacts of language technology. Her work includes the LinGO Grammar Matrix, an open-source starter kit for the development of broad-coverage precision grammars; data statements for natural language processing, a set of practices for documenting essential information about the characteristics of datasets; and two books that make key linguistic principles accessible to NLP practitioners: Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax (2013) and Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics (2019, with Alex Lascarides). She is the co-author of recent influential papers such as 'Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data' (ACL 2020) and 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜' (FAccT 2021). In her public scholarship, she brings linguistic insights to lay audiences to cut through the hype about "AI" and facilitate understanding of the actual functionality of the systems being sold under that name.
- 17.30 – 18.30
Drinks & bites
Friday 4 November 2022
- 8.30 – 9.00
Registration
- 9.00 – 9.05
Welcome to the Edge
- 9.05 – 10.00
Presentation of the COHUBICOL Typology of Legal Technologies
Presented by Mireille Hildebrandt, Laurence Diver, Masha Medvedeva, and Pauline McBride
Coffee (30 mins)
- 10.30 – 11.15
Florian Idelberger, ‘The Uncanny Valley of Computable Contracts – Assessing Controlled Natural Languages for Computable Contracts’
Reply (CS) by Katie Atkinson
Abstract
Automated legal relationships of one form or another are the future; one does not have to listen to Legal Tech enthusiasts to come to this conclusion. In some limited settings, such automated legal relations are already here, and they can be as seemingly innocuous as a social media app that implicitly encodes its privacy policy within its computer code. One avenue for making these relations more accessible is the combination of controlled natural languages (CNLs: languages based on existing languages such as English, but more restrictive) with programming languages, creating a controlled natural language that reads like (for example) English but can be used to create executable computable contracts. CNLs work by reducing the complexity of the natural language to a manageable size, allowing only a limited set of syntax and semantics.
While CNLs are more precise and more suitable for such a task than machine-learning-based natural language processing, they also have limitations, especially their inherently limited expressiveness and the difficulty of learning and writing them. This contribution sketches the historical development of controlled natural languages and their relation to programming languages, and then assesses how valuable CNLs are for computable contracts. In this process, I term the specific property of CNLs that they are often easy to read but hard to master and write the "Uncanny Valley of Computable Contracts", by analogy with the hypothesised phenomenon describing human reactions to humanoid androids.
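To make the trade-off concrete, here is a minimal Python sketch of a toy CNL (invented for illustration, not the language assessed in the paper): a single rigid clause template that is easy to read and directly executable, but that rejects any wording outside the template, which is the "hard to write" side of the valley.

```python
import re
from datetime import date

# A toy CNL with exactly one clause template. Its rigidity is what makes
# clauses both readable and directly executable.
CLAUSE = re.compile(
    r"^(?P<debtor>\w+) shall pay (?P<amount>\d+) euros "
    r"to (?P<creditor>\w+) by (?P<deadline>\d{4}-\d{2}-\d{2})\.$"
)

def compile_clause(text: str):
    """Translate one CNL sentence into an executable compliance check."""
    m = CLAUSE.match(text.lower())
    if m is None:
        # Any natural-sounding wording outside the template is rejected.
        raise ValueError(f"not a valid clause: {text!r}")
    deadline = date.fromisoformat(m["deadline"])
    def check(paid_amount: int, paid_on: date) -> bool:
        return paid_amount >= int(m["amount"]) and paid_on <= deadline
    return check

check = compile_clause("Buyer shall pay 500 euros to Seller by 2022-12-01.")
print(check(500, date(2022, 11, 15)))  # True: paid in full, on time
print(check(500, date(2022, 12, 2)))   # False: paid after the deadline
```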
- 11.15 – 12.00
Mark Burdon, Anna Huggins, Rhyle Simcock, Nic Godfrey, Josh Buckley, Siobhaine Slevin, and Stephen McGowan, ‘From Rules as Code to Legal and Regulatory Coding Strategies’
Reply (CS) by Monica Palmirani
Abstract
‘Rules as Code’ is a broad heuristic that encompasses different conceptual and practical aspects of presenting legal instruments as machine-executable code, especially for use in automated business systems. The presentation of law as code is often considered an isomorphic exercise that can be achieved through a plain-reading approach to interpretation. We report on research findings from coding an Australian Commonwealth statute – the Treasury Laws Amendment (Design and Distribution Obligations and Product Intervention Powers) Act 2019 (Cth) (the ‘DDO Act’) – and the Act’s concomitant regulatory guidance, the Australian Securities and Investments Commission (ASIC) Regulatory Guide 274 (‘RG 274’). We outline the limits of a plain-reading approach to legal coding and demonstrate the encoding challenges identified in the research. We then use Brownsword’s mindsets to reflect upon the coding exercise and to develop the different coding and interpretive approaches that were necessary to resolve the issues encountered in coding the DDO Act and RG 274. The mindset considerations enabled us to outline and delineate distinct legal and regulatory coding strategies that highlight the different cultural contexts and rationales embedded in legal instruments such as legislation and regulatory guidance. In conclusion, we contend that different types of coding strategy are required to meet the significant challenges of coding legal and regulatory instruments.
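A deliberately simple sketch of why a plain reading underdetermines the code (hypothetical, not drawn from the authors' encoding of the DDO Act or RG 274): an obligation built around a standard of the "reasonable steps" kind admits at least two defensible encodings that disagree on the same facts.

```python
# A statute-like rule: "a distributor must take reasonable steps to ensure
# distribution is consistent with the target market determination (TMD)".
# The standard "reasonable steps" forces the coder to choose a proxy.
from dataclasses import dataclass

@dataclass
class Distribution:
    steps_documented: int      # proxy chosen by the coder, not the statute
    consistent_with_tmd: bool  # outcome: consistency with the TMD

def complies_threshold(d: Distribution) -> bool:
    # Interpretive choice A: operationalise "reasonable steps" as a fixed
    # threshold of documented steps, plus the required outcome.
    return d.steps_documented >= 3 and d.consistent_with_tmd

def complies_outcome(d: Distribution) -> bool:
    # Interpretive choice B: treat the standard as satisfied whenever the
    # outcome is achieved, however few steps were taken.
    return d.consistent_with_tmd

d = Distribution(steps_documented=1, consistent_with_tmd=True)
print(complies_threshold(d), complies_outcome(d))  # False True: same "rule"
```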
Lunch (1 hour)
- 13.00 – 13.45
Zhenbin Zuo, ‘Automated law enforcement: An assessment of China’s Social Credit Systems (SCS) using interview evidence from Shanghai’
Reply (CS) by Cynthia Liem
Abstract
This paper provides one of the first fieldwork-based studies to understand China's Social Credit Systems (SCS) through the lens of automated law enforcement. Evidence is drawn from semi-structured interviews, conducted in April 2021, with Shanghai-based local government officials, judges and corporate employees: the actors who supervise, manage, and/or operate Shanghai’s SCS at the level of daily practice. The paper focuses on the use of blacklists and joint sanctions within the wider framework of the SCS. The interview evidence, combined with online archival research, uncovers a more complete understanding than previously available of the detailed workings of these systems and of their impacts, both positive and negative, in the field. The results show that automation is perceived to have achieved efficient scaling, but also to have negative consequences, including rigidity at the level of code, and perverse or counter-productive incentives at the level of human behaviour. Using an institutional theoretical framework which identifies the role of governance in ‘scaling and layering’, the paper argues that automated enforcement can only achieve scale effects if human judgement is combined with automation. Human agency is needed to continuously realign and re-fit code-based systems to text-driven laws and social norms in specific spatio-temporal environments. In the final analysis, code operates in a path-dependent and complementary way to these other forms of governance. From social norms to laws, to data and to code, governance is layered via formalisation.
- 13.45 – 14.30
Maurizio Parton, Marco Angelone, Carlo Metta, Stefania D’Ovidio, Roberta Massarelli, Luca Moscardelli, and Gianluca Amato, ‘Artificial Intelligence and renegotiation of commercial lease contracts affected by pandemic-related contingencies from Covid-19. The Project A.I.A.Co.’
Reply (CS) by Caroline Cauffman
Abstract
This paper investigates the possibility of using artificial intelligence (AI) to resolve the legal issues raised by the Covid-19 emergency concerning the fate of contracts of continuing execution, or those with deferred or periodic execution, as well as, more generally, to deal with exceptional events and contingencies. We first study whether the Italian legal system allows for “maintenance” remedies to cope with contingencies and to avoid the termination of the contract, while ensuring effective protection of the interests of both parties. We then give a complete and technical description of an AI-based predictive framework aimed at assisting both the Magistrate (in the course of litigation) and the parties themselves (in out-of-court proceedings) in the redetermination of the rent of commercial lease contracts. This framework, called A.I.A.Co. (Artificial Intelligence for contract law Against Covid-19), was developed under the Italian grant “Fondo Integrativo Speciale per la Ricerca”.
Coffee (30 mins)
- 15.00 – 15.45
José Antonio Magalhães, 'Platforms as law: A speculative approach to the study of law and technology'
Reply (CS) by Cecilia Rikap
Abstract
This paper aims to offer a nomic concept of the platform, which is to say a concept of the platform from the perspective of legal theory. Its theoretical framework consists of a reconstruction of legal theory under the influence of contemporary speculative theories such as 'speculative realism/materialism', the 'ontological turn' in anthropology, 'new materialisms' and 'assemblage theory'. I start by introducing the search for a legal concept of the platform in the context of the 'code as law/law as code' debate. I contrast what I call, on the one hand, a correlationist approach to the problem of law and technology and, on the other, speculative legal theory. I then seek to define nomic concepts of the platform and of a platform’s associated environment or ‘demic milieu’. I define platforms as law in two different, though interrelated, senses: platforms as coded law and as algorithmic governance. I also distinguish between three types of norms active in ‘platforms as law’: coded norms, interfacial norms and demic (or environmental) norms. Finally, I seek to offer a conceptual toolbox that allows us to analyse nomic platforms in their constitutive parts, focusing especially on the concepts of device, application, interface and user.
- 15.45 – 16.45
Keynote
Frank Pasquale: Positional Services: Avoiding a Legal Technology Arms Race
Speaker bio
Frank Pasquale is an expert on the law of artificial intelligence (AI), algorithms, and machine learning. He currently serves on the U.S. National Artificial Intelligence Advisory Committee (NAIAC), which advises the President and the National AI Initiative Office at the Department of Commerce. Before coming to Brooklyn Law, he was Piper & Marbury Professor of Law at the University of Maryland, and Schering-Plough Professor of Health Care Regulation & Enforcement at Seton Hall University. He clerked for Judge Kermit V. Lipez of the First Circuit Court of Appeals, and was an associate at Arnold & Porter in Washington, D.C.
Pasquale's 2015 book, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press), has been recognized as a landmark study in information law. It is cited in fields ranging from law and computer science to sociology and literature. The book develops a social theory of reputation, search, and finance, while promoting pragmatic reforms to improve the information economy. His latest book, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020) analyzes the law and policy influencing the adoption of AI in varied professional fields. Pasquale has also co-edited The Oxford Handbook of Ethics of AI (Oxford University Press, 2020), has edited or co-edited three other books, and co-authored a casebook on administrative law.
- 16.45 – 17.00
Conference wrap up