Goals
The COHUBICOL project was focused on what the proposal called the ‘uncountable countabilities and unaccountable countabilities’ of computational law. Computational law is subdivided into data-driven and code-driven law, with the first referring to technologies involving machine learning and the second to the translation of legal norms into computer code to enable automated enforcement and/or compliance. Data-driven law thus engages with legal search engines and the prediction of judgments; code-driven law engages with Rules as Code (RaC) and automated decision making (ADM) in the context of public administration. In other words, the project aims to investigate how the transmutation of incomputabilities in the domain of law into computable assets, variables, probabilities, flow charts and executable code will affect key tenets of the rule of law, more specifically the legal protection offered by law-as-we-know-it, that is, modern positive law.
The overarching goal of the research is to prepare a new legal hermeneutics, i.e. a new theory and a new practice of legal interpretation for the upcoming hybrid of ‘traditional’ and computational law.
This involves three key intermediate goals: (1) a new understanding of the nature of legal protection in traditional, text-driven law, (2) an analysis of the potential for (or the lack of) legal protection when legal search engines or the prediction of judgments are deployed by courts or law firms and (3) an analysis of the potential for (or the lack of) legal protection in legislation-as-software (RaC) and the automation of enforcement and compliance (ADM).
The outcome of (1) has been published in the Research Study on Text-Driven Law; the outcome of (2) and (3) has been published in the Research Study on Computational Law. Both Studies are available in open access on the COHUBICOL website, as a PDF and in HTML. Both have ‘pull quotes’ that offer a quick overview of the main points made in the text. The text is thus designed to offer both a kind of random access to the content, drawing the reader’s attention to key findings, and in-depth argumentation in the main text. On top of this, the HTML text has been designed as a new way of offering scientific output, by providing (1) a menu that links the Studies to, for example, the vocabularies of foundational legal concepts, the key computer science terminology and the Typology of Legal Technologies, (2) a table of contents on the side for a quick overview and easy navigation and (3) the affordance of deep linking to a specific section or even paragraph, which increases the hyperlinkability of the text and allows easy dissemination of and interaction with the text. This new way of offering scientific ‘content’ has been developed by postdoctoral researcher Laurence Diver, in close collaboration with the PI. Diver’s background in both law and web development has been an invaluable contribution to the project, especially in rethinking the presentation and dissemination of the findings of scientific research.
The outcome of (2) and (3) has also been developed and embedded in the Typology of Legal Technologies, an online webtool that offers a method, mindset and resource to map, compare and assess specific types of legal technologies. A number of filters allow the user to quickly locate, for example, either data- or code-driven technologies or hybrids thereof, to select a specific target audience for such technologies (e.g. courts, general counsel, legislators, litigators), or to select a specific functionality (e.g. automated decision making, compliance support, legal expert systems, litigation in the form of prediction of judgment, analytics or drafting). On top of that, one can select one of three types of entries: (a) an application, (b) a scientific paper and (c) a training data set. Each technology is defined in terms of (i) intended users, (ii) code- or data-driven or both, (iii) format (e.g. component or platform), (iv) automation or support, (v) in use or not, (vi) type of developers and/or providers (e.g. academics, technical folk, law firms), (vii) jurisdiction of the developers, (viii) target jurisdiction and (ix) access (e.g. licence, open access). This allows for a fine-grained and in-depth mapping of different types of systems, allowing relevant comparison and laying the groundwork for a more in-depth assessment. Such assessment is achieved by raising and answering two questions: (A) what functionality is claimed on behalf of the legal technology (based on publicly available claims made on websites and in documents, with links to the relevant web archive page) and (B) how this functionality is substantiated (based on available evidence on the provider’s website, relevant repositories such as GitHub, or scientific papers) or could be substantiated (based on in-depth research by the computer scientists in close collaboration with the lawyers). Only after this preliminary assessment, which requires close collaboration between domain experts (lawyers) and technical and scientific expertise (computer science), are the potential technical issues and the potential legal impact mapped. In the project application, I proposed a cross-disciplinary approach, to be distinguished from inter- or multi-disciplinary approaches. The research into the question of how functionality claims can be substantiated is a concrete application of such cross-disciplinary research. Instead of trying to develop a shared vocabulary, we have made the effort of explaining the terminology and related methodology of our own discipline to ‘the other discipline’. This is a precondition for understanding whether and how functionality claims can be assessed, first, in terms of what the technologies actually afford and, second, in terms of what those affordances mean for law, the rule of law and legal protection.
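To make this structure concrete, the sketch below models a Typology entry and a simple filter in Python. It is a minimal illustration only: the field names merely mirror the attributes (i)–(ix), the three entry types and the two assessment questions described above, and do not reproduce the actual data model of the COHUBICOL webtool.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, simplified model of a Typology entry. The field names only mirror
# the attributes (i)-(ix) and questions (A)/(B) described in the text; they are
# not the actual schema of the webtool.
@dataclass
class TypologyEntry:
    name: str
    entry_type: str              # 'application' | 'scientific paper' | 'training data set'
    intended_users: List[str]    # (i) e.g. ['courts', 'litigators']
    drivenness: str              # (ii) 'code-driven' | 'data-driven' | 'hybrid'
    format: str                  # (iii) e.g. 'component' or 'platform'
    automation_or_support: str   # (iv)
    in_use: bool                 # (v)
    developers: List[str]        # (vi) e.g. ['academics', 'law firm']
    developer_jurisdiction: str  # (vii)
    target_jurisdiction: str     # (viii)
    access: str                  # (ix) e.g. 'licence', 'open access'
    claimed_functionality: str   # question (A): what is claimed, with sources
    substantiation: str          # question (B): how the claim is (or could be) substantiated
    technical_issues: List[str] = field(default_factory=list)
    potential_legal_impact: List[str] = field(default_factory=list)


def filter_entries(entries: List[TypologyEntry],
                   drivenness: Optional[str] = None,
                   user: Optional[str] = None) -> List[TypologyEntry]:
    """Locate e.g. data-driven technologies aimed at courts, analogous to the webtool's filters."""
    return [e for e in entries
            if (drivenness is None or e.drivenness == drivenness)
            and (user is None or user in e.intended_users)]
```

A call such as `filter_entries(db, drivenness='data-driven', user='courts')` would then correspond to combining the ‘data-driven’ and ‘courts’ filters in the online tool.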
The Typology thus presents and demonstrates a new legal hermeneutics for computational law, together with an explanation of ‘how to use’, ‘FAQs and methodology’, ‘teaching and training resources’ and ‘outreach activities’. This hermeneutics is ‘legal’ because it traces the effect that the deployment of various types of legal technologies will have on legal effect; it is hermeneutical because it requires an interpretation of the affordances of technical tools other than those of the script or the printing press. The Research Studies make the theoretical argument for such a cross-disciplinary hermeneutics, building on the practice of developing and deploying this hermeneutics. The deployment has been demonstrated in two research blogs in which the Typology has been applied to new technologies that are not part of its database: first, to the case law tracker made available on the Dutch website recht.nl (a legal search engine) and, second, to Casetext’s CoCounsel, which offers myriad functionalities based on generative AI (developed in close collaboration with OpenAI). These research blogs demonstrate why the Typology is not just a resource or a database but rather a method and mindset that can and should be applied to test novel legal technologies against the key questions raised in the Typology: what functionality is claimed, (how) can it be substantiated, what technical issues are at stake and – crucially – what may be the legal impact on law, legal protection and the rule of law.
Objectives
The overarching goal of preparing a new legal hermeneutics has been subdivided into three objectives:
Objective 1: Detecting the assumptions of text-driven, data-driven and code-driven law
The first objective targets the assumptions of text-driven, data-driven and code-driven law, uncovering relevant differences. The assumptions we seek to identify are not necessarily explicitly adhered to or even admitted. Instead, they constitute the implicit backbone of various disciplinary, professional, cultural and individual practices. They require philosophical inquiry and reflection as they concern the assumptions of empirical and theoretical investigations rather than their outcome.
- This objective has been key to our research and underpins the cross-disciplinary approach. In the Research Study on Text-Driven Law we have explored the assumptions of text-driven law, most notably the nature of modern positive law and how those assumptions relate to the ambiguity inherent in natural language that is enhanced by the technological infrastructure of the printing press. The assumptions of text-driven law can be summarised as the inescapable multi-interpretability of the law, which grounds and enables adversarial procedures and informs the foundational contestability of legal decision making. The assumptions of code-driven law, however, concern the possibility of an isomorphism between written legal norms and the software that represents or executes them, suggesting that the act of translation is lossless and has no implications for the meaning of the legal norm or the legal effect it stipulates. The assumptions of data-driven law, in turn, concern the similarity of the distribution of training data (e.g. legal text corpora, i.e. language behaviours) and that of test data (future text, i.e. future language behaviours), coupled with the assumption that legal text data is a lossless proxy for the actions of courts (both in the past and the future); a minimal synthetic illustration of this distributional assumption follows after this list. In the Research Study on Computational Law, these assumptions are traced and foregrounded, as they have major implications for the mode of existence of law and the rule of law. In the Typology of Legal Technologies, the assumptions of computational law surface when investigating the substantiation of claimed functionalities, as such substantiation depends on the isomorphism between legal text and the relevant software code, on the similarity between the distribution of training and test data and on the appropriateness of the legal text data as a proxy for legal decision making. If these assumptions are incorrect, the claims made cannot be substantiated.
- Key publications that explain the findings regarding this objective: Laurence Diver’s 2022 monograph Digisprudence: Code as Law Rebooted; Tatiana Duarte’s 2022 ‘Google and Apple Exposure Notifications System: Exposure Notifications or Notified Exposures?’ in Privacy Technologies and Policy; Mireille Hildebrandt’s 2021 ‘The Issue of Bias. The Framing Powers of Machine Learning’ in Machines We Trust, her 2021 ‘The Adaptive Nature of Text-Driven Law’ in the Journal of Cross-Disciplinary Research in Computational Law, her 2022 ‘Qualification and Quantification in Machine Learning. From Explanation to Explication’ in Sociologica and her 2023 ‘Boundary Work between Computational “Law” and “Law-as-We-Know-It”’ in Data at the Boundaries of European Law; Emilie van den Hoven’s 2022 ‘Making the Legal World: Normativity and International Computational Law’ in a special issue of Communitas; and Paulus Meessen’s 2023 ‘On Normative Arrows and Comparing Tax Automation Systems’ in the Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law (ICAIL).
- The first Philosophers’ Seminar (2019), on ‘Text-Driven Normativity and the Rule of Law’, was dedicated to exploring this objective: https://www.cohubicol.com/about/philosophers-seminars/
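As a purely illustrative aside, the snippet below (synthetic data only, using scikit-learn; no actual legal corpora or COHUBICOL materials are involved) shows why the distributional assumption of data-driven law matters: a model that scores well when ‘future’ data resembles the ‘past’ it was trained on collapses to roughly chance level once that similarity no longer holds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for 'past' language behaviours and court outcomes:
# in the historical data the outcome depends on feature 0.
X_past = rng.normal(size=(2000, 5))
y_past = (X_past[:, 0] > 0).astype(int)

# Held-out data drawn from the *same* distribution as the past.
X_same = rng.normal(size=(1000, 5))
y_same = (X_same[:, 0] > 0).astype(int)

# 'Future' data whose distribution has shifted: outcomes now depend on feature 1,
# violating the assumption that past and future data are similarly distributed.
X_future = rng.normal(size=(1000, 5))
y_future = (X_future[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_past, y_past)
print("accuracy when the distribution is unchanged:", model.score(X_same, y_same))
print("accuracy when the distribution has shifted: ", model.score(X_future, y_future))
# The first score is close to 1.0, the second close to 0.5 (chance level),
# which is why 'prediction of judgment' claims hinge on this assumption.
```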
Objective 2: Tracing the implications of text-driven, data-driven and code-driven normativity for law
The second objective targets the implications and consequences of computational law, notably the transformation of text-driven normativity into data-driven and code-driven normativity. The findings of this research will clarify how the act of translation (of written and unwritten legal norms into data-driven and code-driven applications) transforms the meaning of core legal concepts and how this will affect the mode of existence of positive law and the Rule of Law.
- This objective moves from analysis and reflection to speculation and argumentation. It does not ask for data-driven predictions about the consequences of introducing legal technologies, nor does it ask for empirical research. Instead, it asks what the assumptions uncovered under the first objective imply for law-as-we-know-it and more specifically for the rule of law and legal protection. Speculation, however, should not be understood as the result of fantasies, but rather as the result of an in-depth understanding of the role of assumptions in scientific research and a rigorous anticipation of their potential implications when they ground real-world applications. Speculation asks what kind of transformations are to be expected due to the integration of technologies that entail entirely different assumptions about the meaning of law, moving from the assumed multi-interpretability and contestability of legal normativity to a normativity built on the isomorphism between legal norms and software code and on the equivalence between historical and future distributions of relevant data. In a sense this is a move from ‘meaning’ as action to ‘meaning’ as behaviour, that is, from a Peircean and Wittgensteinian understanding of meaning to a behaviourist understanding of meaning. This implies a move from a law that is dynamic and built on written and unwritten speech acts that have a performative effect (Austin’s ‘doing things with words’) to a law that is static and built on historical choices and historical behaviours, as summarized in the title of my chapter ‘Scaling the Past while Freezing the Future’, published in the volume Is Law Computable?, edited by Deakin and Markou. The main implication is that contestation, and thereby legal protection, would become illusory if the assumptions of isomorphism between written and coded law and of similarity of data distribution between past and future replaced the multi-interpretability of legal norms. We can see this implication surface under ‘potential legal impact’ in the Typology, which has been explored in the third Philosophers’ Seminar under the heading of ‘effect on legal effect’. The notion of ‘effect on legal effect’ is directly connected with transformations in the ‘mode of existence’ of modern positive law, based on the insight that different technologies building on different assumptions will have different affordances.
- Key publications that explain the findings regarding this objective: Laurence Diver’s 2021 ‘Computational legalism and the affordance of delay in law’ in the Journal of Cross-Disciplinary Research in Computational Law; Laurence Diver and Pauline McBride’s 2022 ‘Argument by the Numbers. The Normative Impact of Statistical Legal Tech’ in Communitas; Gianmarco Gori’s 2023 ‘The Legal Protection Debt of Training Datasets’, a report published with the EU Horizon project HumanE AI Net, and his forthcoming 2024 ‘Legal and Computer Rules: An Overview Inspired by Wittgenstein’s Remarks’ in Wittgenstein and Artificial Intelligence; Mireille Hildebrandt’s 2023 ‘Text-Driven Jurisdiction in Cyberspace’ in Transformations in Criminal Jurisdiction. Extraterritoriality and Enforcement and her 2023 ‘Ground-Truthing in the European Health Data Space’ in the Proceedings of the 16th International Joint Conference on Biomedical Engineering, Systems and Technologies; Emilie van den Hoven’s 2021 ‘Hermeneutical Injustice and the Computational Turn in Law’ in the Journal of Cross-Disciplinary Research in Computational Law; and Pauline McBride and Laurence Diver’s 2023 ‘ChatGPT and the Future of Law’ in the Law Society of Scotland.
- The second and third Philosophers’ Seminars, of 2020 (‘Interpretability issues in Machine Learning’) and 2021 (‘The Legal Effect of Code-driven Law’), were dedicated to exploring this objective, regarding data-driven law (2020) and code-driven law (2021): https://www.cohubicol.com/about/philosophers-seminars/
Objective 3: Novel conceptual tools: ‘affordance’, ‘modes of existence’ and ‘legal protection by design’
The third objective is the development of novel conceptual tools that highlight the relational and ecological nature of law in its changing environment. These concepts will help to identify the assumptions and to trace the implications of text-driven and computational law. The concepts of affordances, modes of existence and legal protection by design will be used to situate law, individual human beings and society in their technological environment. By developing a relational and ecological perspective, these tools will help to rethink and reconstruct legal protection in the architecture of data-driven and code-driven law. This objective will thus contribute to the innovation of legal methodologies, notably by avoiding naive transplants from data science into law. It will thereby contribute to a new theory and practice of interpretation (hermeneutics) for both types of computational law, which should enhance our understanding of:
- how computational law can be used to augment rather than replace human legal intelligence;
- whether, when and how computational law could nevertheless be used to replace human search, analysis, and decision-making in a legal context without jeopardizing the central tenets of the Rule of Law;
- how the underlying techniques and technologies can be made testable and contestable to support the fair, transparent and accountable employment of computational law; and, finally
- how the collaboration between lawyers and computer scientists can contribute to an emerging legal framework that affords equal respect and concern, making sure that each person counts as a human being in law, notably protecting what is not countable.
- This objective aims to test three interrelated conceptual tools as a new prism for both the analysis of assumptions and the speculation about implications, offering instruments to assess how legal technologies can be integrated in ways that (A) augment legal imagination rather than destroy it, (B) address the digital transformation of legal practice by hyperlinking legislative text to relevant case law and to applicable fundamental rights, rather than mimicking the citation standards of scientific research that have replaced qualitative with quantitative assessments, (C) make legal technologies testable and contestable rather than requiring trust in unsubstantiated claims of functionality and (D) contribute to a new kind of cross-disciplinary collaboration rather than adopting hidden assumptions that denaturalise law while infringing the rule of law and obstructing legal protection. The Research Study on Text-Driven Law pays keen attention to the affordances of the technologies of the word that ground modern positive law, explaining current law’s mode of existence as an implication of those affordances. The Research Study on Computational Law takes this a step further by analysing and demonstrating how various types of data- and code-driven legal technologies afford such a transformation and a concomitant loss of legal protection. The prize-winning NLLP 2023 paper by Medvedeva and McBride argues that this transformation may be informed by an appropriate research design, thus introducing some requirements for legal protection by design.
- Key publications that explain the findings regarding this objective: Laurence Diver’s 2021 ‘Interpreting the Rule(s) of Code: Performance, Performativity, and Production’ in the MIT Computational Law Report; Gianmarco Gori’s 2020 ‘Promoting Artificial Legal Intelligence While Securing Legal Protection: The Brazilian Challenge’ on the COHUBICOL Research Blog; Mireille Hildebrandt’s 2021 ‘Understanding Law and the Rule of Law: A Plea to Augment CS Curricula’, Communications of the ACM 64 (5): 28–31, https://doi.org/10.1145/3425779, and her 2023 ‘Grounding Computational “Law” in Legal Education and Professional Legal Training’ in the Research Handbook on Law and Technology; Pauline McBride and Masha Medvedeva’s 2023 ‘Casetext’s CoCounsel through the Lens of the Typology’ on the COHUBICOL Research Blog and their 2023 ‘Legal Judgment Prediction: If You Are Going to Do It, Do It Right’ in the Proceedings of the Natural Legal Language Processing Workshop 2023; and Emilie van den Hoven, Paulus Meessen and Masha Medvedeva’s 2022 ‘Caselaw Revisited: Recht.Nl’s Case Law Tracker Assessed with the Typology of Legal Technologies’ on the COHUBICOL Research Blog.
- The fourth Philosophers’ Seminar of 2023 (‘Compliance and Automation in Data Protection Law’) was dedicated to exploring this objective: https://www.cohubicol.com/about/philosophers-seminars/.