Identifying assumptions underlying legal arrangements

Legal arrangements rest on behavioural, cognitive, social, and other assumptions regarding their role and function in society and the legal system. Identifying and subsequently evaluating these assumptions is an important task for legal scholarship. In this article, we focus on the identification and categorisation of these assumptions, providing conceptual distinctions and methodological guidance. We distinguish between assumptions about the value(s), norm(s), or interest(s) underlying a legal arrangement, which can be legal or non-legal, and assumptions about the relationship between the legal arrangement and its underlying value(s), norm(s), or interest(s), which can be logical, causal, or contributory. Regarding identification, we consider explicit references, inference to the best explanation, and theory-driven evaluations as possible methods. Inference to the best explanation, we posit, functions as a way of reconstructing the theory that the person(s) creating a legal arrangement had in mind regarding the place and function of that legal arrangement in society. On this basis, we offer a step-by-step approach to reconstructing this theory in use, drawing on theory-driven evaluations and their sources in the social sciences. These distinctions and guidelines can contribute to understanding the context and untangling the complexities involved in identifying the assumptions that underlie legal arrangements.


Frans L. Leeuw and Antonia M. Waltermann, ‘On Identifying Assumptions Underlying Legal Arrangements’, LaM May 2022, DOI: 10.5553/REM/.000067

On the legal responsibility of artificially intelligent entities

In this paper, I tackle three misconceptions regarding the legal responsibility of artificially intelligent entities, namely that they:

(a) cannot be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act;

(b) should not be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act; and

(c) should not be held legally responsible for their actions, because doing so would allow other (human or corporate) agents to ‘hide’ behind the AI and thereby escape the responsibility that is properly theirs.

Waltermann, A. (2021). On the legal responsibility of artificially intelligent agents: Addressing three misconceptions. Technology and Regulation, 2021, 35-43. https://techreg.org/index.php/techreg/article/view/79