One-Day Workshop on AI & Evidential Inference
in Conjunction with
ICAIL 2011, Pittsburgh, Pennsylvania, June 10, 2011

Schedule of Talks

 



Papers & Abstracts


Ronald J. Allen, "Taming Complexity:  Rationality, the Law of Evidence, and the Nature of the Legal System"
This essay explores the implications of complexity for understanding both the law of evidence and the nature of the legal system. Among the propositions critically analyzed is that one significant way to understand the general problem of the meaning of rationality is that it has involved a multivariate search for tools to understand and regulate a hostile environment. The law of evidence is conceptualized as a subset of this effort, at least in part, as involving a search for tools to regulate the almost infinitely complex domain of potentially relevant evidence and at the same time to accommodate policy demands. The proposition is then considered that the legal system of which the evidentiary system is a part has emergent properties that may not be deducible from its component parts and that suggest that it may be, or at least has properties highly analogous to, a complex adaptive system. One implication of this analysis is that the tools of standard academic research, which rely heavily on the isolation and reduction of analytical problems to manageable units so that they can be subjected to standard deductive methodologies, may need to be supplemented with analytical tools that facilitate the regulation of complex natural phenomena such as fluid dynamics. This has direct implications for such things as the conception of law as rules, and thus for the Hart/Dworkin debate that has dominated jurisprudence for 50 years. That debate may have mischaracterized the object of its inquiry, and thus the Dworkinian solution to the difficulties of positivism is inapplicable. Even if that is wrong, it can be shown that the Dworkinian solution is not achievable and cannot rationally be approximated. Solutions to legal problems within the legal system as a whole (as compared to any particular node within the legal system) are arrived at through a process of inference to the best explanation that occurs within a highly interconnected set of nodes that has similarities to a neural network.
Rainhard Bengez, "On the Computable Structure of the Logocratic Method and Analyses Specific to Evidence Law"
My contribution aims at discussing the computational structure of Scott Brewer’s logocratic method specific to evidence law. It intends to contribute to this fundamental methodology by developing four interrelated goals. The first is to provide a meta-logical and computable framework and foundation for the process of specification and the design of a domain-specific language. The second is to employ this framework to introduce and study the bivalent structure of enthymemata (a logical-syntactical structure and a semantic or interpretative structure obtained by assigning a certain measure). The third is to provide some algorithms and data structures for practical training software; this software concept and tool can be used in education or for assisting analysts. The last is to use these models and concepts empirically to analyze the time-dependent structure of evidence and arguments used in legal practice. This extends the notion of the logocratic method to the more general arena of mathematical modeling and epistemic logic, where the aim is to build formal models constructed from specifications in order to formulate algorithms and procedures for semi-autonomous systems (weak AI). In short, by formulating a computable structure for the logocratic method we discover a meta-structure, gain a deeper insight into our own practice of evaluating enthymemata, can investigate the time-dependent structure of evidence law, and can provide some algorithms for a software base that can be used for simulation and semi-autonomous reasoning.
Floris Bex (with Douglas Walton), "Combining Evidential and Legal Reasoning with Burdens and Standards of Proof"
In this paper, we provide a formal logical model of evidential reasoning with proof standards and burdens of proof that enables us to evaluate evidential reasoning by comparing stories on either side of a case. It is based on a hybrid inference model that combines argumentation and explanation, using inference to the best explanation as the central form of argument. The model is applied to one civil case and two criminal cases. It is shown to have some striking implications for modeling and using traditional proof standards like preponderance of the evidence and beyond reasonable doubt.
Scott Brewer, "Abducing Abduction"
In this paper I offer a meta-abduction: an abduction whose explanandum is the process of abduction itself. Having identified what I regard as the general structure of abduction, I identify several types of abduction that are of special interest to legal argument, both in the field of evidence specifically and in areas of substantive law more generally. A central theme of my argument will be that legal arguments, like all arguments that are presented non-formally (arguments that are not presented in the formal language of logic or mathematics), are structurally enthymematic. Very often it is not clear what exactly the premises and conclusion of a legal argument one finds in a judicial opinion, or brief, or law review article are. Such arguments I refer to as practically enthymematic. Sometimes it is also unclear which of the four irreducible types of logical inference, namely deduction, induction, abduction, and analogy, a particular argument token has. This is structural enthymemicity. Indeed, a good deal of important jurisprudential work has been done to discern which modes of logical inference, in general, are to be found in legal argument. Because legal arguments are structurally, as well as practically, enthymematic, the philosophical task of explaining the nature of legal argument is in large part the task of abductive inference. In this way, although all four modes of logical inference are found in legal argument, abductive inference is central to the explanation of legal argument. And in this way abduction is prima inter pares for (that is, central to) the understanding of legal argument.

James Franklin, "How much of commonsense and legal reasoning is formalizable? A review"

After decades of experience with Artificial Intelligence, it is clear that commonsense and legal reasoning share a number of obstacles to formalization - obstacles not found in some other areas such as mathematics or pathology test interpretation. We review some of the main difficulties, including: the fuzziness or open texture of concepts, leading to borderline cases and problems of similarity (e.g. of cases to precedents); the problem of relativity to context with the difficulty of representing context; the subtleties of causation and counterfactuals; problems of probabilistic and default reasoning including reference class problems. Having surveyed those problems, we extract from them two higher-order issues that are responsible for much of the trouble: discreteness versus continuity (the mismatch between the discreteness of formal symbols and the continuous variation of commonsense concepts), and understanding (the need for genuine human understanding to make the first step in correct classification). It is concluded that full formalization of legal reasoning is unachievable, though there are prospects for systems that behave intelligently by harvesting the results of human understanding in the manner of Google.


David Hamer, "A probabilistic model of the relationship between the quantity (weight) of evidence, and its strength"

A recurrent problem for the probabilistic representation of proof in legal and other contexts is its purported failure to adequately reflect the quantity or weight of evidence. There are cases (such as the naked statistical evidence hypotheticals) where, probabilistically, a slight body of evidence appears to provide a strong measure of support; however, a common intuition is that the evidence would lack sufficient weight to constitute legal proof. Further, if the probability measure has no relation to the weight of evidence, it would appear to express scepticism about the value of evidence. Why bother considering fresh evidence, increasing the weight of evidence, if it provides no epistemic benefit?

Probability theory can avoid the spectre of scepticism. There are probabilistic measures which can be expected to increase as the quantity of evidence increases – utility, certainty, and the probability score. In this paper I prove the relationship between quantity of evidence and these expected increases, and provide a computer model of the relationship. The model highlights some interesting features. As the weight of evidence increases, greater certainty can be expected. Equivalently, on numerous runs of the model, greater certainty is achieved on average. But in particular cases, certainty may remain the same or decrease. The average and expected increase will be more accentuated where more decisive evidence is available, but the result still holds for situations where the evidence is merely probative. Notwithstanding that certainty can be expected to increase, the probability measure expected from considering fresh evidence is, by definition, equal to the prior probability measure.

These results provide a response to the opponents of probabilistic measures of proof. Probability theory is not sceptical of the value of evidence, and does provide motivation for considering fresh evidence. However, it makes no assumption that the fresh evidence will necessarily put the fact-finder in a stronger epistemic position.
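
To illustrate the relationships the abstract describes, here is a minimal Monte Carlo sketch in Python (not the author's computer model; the prior and likelihoods are invented for illustration). It shows that the expected posterior equals the prior, while the probability (Brier) score improves and certainty about the true hypothesis increases in most, though not all, runs.

# A minimal Monte Carlo sketch of the relationships described above. The prior and
# likelihoods below are invented for illustration; this is not the author's model.
import random

def simulate(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.3, runs=100_000):
    sum_posterior = 0.0          # to check that E[posterior] equals the prior
    brier_prior = 0.0            # probability (Brier) score using the prior
    brier_post = 0.0             # probability (Brier) score using the posterior
    certainty_up = 0             # runs where the probability of the true hypothesis rose
    for _ in range(runs):
        h = random.random() < prior                                    # true state of the world
        e = random.random() < (p_e_given_h if h else p_e_given_not_h)  # whether the evidence appears
        # Bayes' theorem: posterior probability of h given the observed evidence value
        like_h = p_e_given_h if e else 1.0 - p_e_given_h
        like_not_h = p_e_given_not_h if e else 1.0 - p_e_given_not_h
        posterior = like_h * prior / (like_h * prior + like_not_h * (1.0 - prior))
        sum_posterior += posterior
        truth = 1.0 if h else 0.0
        brier_prior += (prior - truth) ** 2
        brier_post += (posterior - truth) ** 2
        certainty_up += (posterior if h else 1.0 - posterior) > (prior if h else 1.0 - prior)
    print(f"mean posterior: {sum_posterior / runs:.3f} (prior was {prior})")
    print(f"Brier score: {brier_prior / runs:.3f} with prior, {brier_post / runs:.3f} with posterior")
    print(f"certainty about the true hypothesis increased in {certainty_up / runs:.1%} of runs")

simulate()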


Bruce Hay, "Roughly Two Conceptions of the Trial"


Joseph Laronge, "Evaluating Universal Sufficiency of a Single Logical Form for Inference in Court"

Inference in court is subject to scrutiny for structural correctness (e.g., deductive or nonmonotonic validity) and probative weight in determinations such as probative relevancy and sufficiency of evidence. These determinations are made by judges or informally by jurors who typically have little, if any, training in formal or informal logical forms. This paper explores the effectiveness of a single intuitive categorical natural language logical form (i.e., Defeasible Class-Inclusion Transitivity, DCIT) for facilitating such determinations, and its universal sufficiency for constructing any typical inferential network in court. This exploration includes a comparison of the functionality of hybrid branching tree-like argument frameworks with the homogeneous linear path argument framework of DCIT. The practicality of customary dialectical argument semantics and of conceptions of probative weight is also examined, and alternatives are proposed. Finally, the use of DCIT for depicting the reasoning of legal cases typically used in AI research is considered.


Michael Pardo, "Relevance, Sufficiency, and Defeasible Inferences: Comments on Modeling Legal Proof"
This paper discusses criteria for formal models of legal proof at two levels: the micro-level issue of the relevance of particular items of evidence, and the macro-level issue of the sufficiency of evidence as a whole to satisfy particular proof standards. At both levels, I examine criteria along two different dimensions. First, I explore a content-based distinction—whether the relationships between evidence and contested propositions ought to be modeled based on probabilistic or explanatory criteria. Second, I explore a structural distinction—whether the inferences being modeled ought to be modeled as defeasible or non-defeasible. I conclude that modeling legal proof based on defeasible, explanatory criteria provides a more plausible avenue than the alternatives, but also that there appear to be significant limitations on the utility of such models.


Federico Picinali, "Structuring inferential reasoning in criminal cases. An analogical approach"

The paper proposes a normative theory of inferential reasoning in criminal cases. The straightforward approach adopted in the work essentially consists in structuring factual inference and then exploring the dynamics of the structure as they are influenced by the requirements of evidence law, including, in particular, the standard of proof.

Factual inference is conceived of as having three components: a generalization, a probability statement attached to the generalization, and an analogy. The third component of this three-part structure is understood in terms of its classical meaning of “resemblance of relations”. Analogy performs a pivotal role: it puts the structure into “motion” by translating a general statement into a singular one.

While analogical reasoning has been widely studied in connection with legal reasoning and legal adjudication, little attention has been devoted to it by evidence law scholars. This paper aims to show that an analogy-based theory of inferential reasoning is a useful tool for investigating the characteristics of factual inference. In particular, the work claims that viewing juridical fact finding through the lens of the proposed normative theory has the following three main merits.

First, the theory sketched in this paper makes it possible to incorporate in a single framework the important insights of different approaches to “reasoning under uncertainty” in a way that is consistent with the demands of the evidence law.

Second, the theory of inference presented here aids in the assessment of some evidential problems that have been widely discussed in recent scholarship. By considering these problems in light of the tripartite inferential structure herein described, the paper attempts to clarify their nature and provides either tentative solutions or a solid foundation for further discussion.

Third, the proposed conceptualization allows for a functional taxonomy of reasonable doubts, a taxonomy that can automatically be derived from the tripartite inferential structure sketched in the paper.

The discussion here is limited to fact finding in criminal trials. Indeed, the peculiar standard of proof that the evidence law demands in this area necessarily informs any contextual normative theory of inference, influencing both its ends and its constituents.



Henry Prakken, "Can non-probabilistic models of legal evidential inference learn from probability theory?"

Recent miscarriages of justice in the Netherlands have led to increasing interest in Dutch legal practice in scientifically founded ways of thinking about evidence. In the resulting debate between academics and legal practitioners there is a tendency to conclude that the only scientifically sound way to perform legal evidential inference is in terms of probability theory. However, as is well known in our research communities, the languages of probability theory and the law are miles apart, which creates the danger that when courts attempt to model their evidential reasoning as probabilistic reasoning, the quality of their decisions will not increase but decrease.

For this reason many have proposed alternative models of evidential legal inference. Recently it has been claimed that AI accounts of argumentation and scenario construction are easier to apply in legal settings than probability theory. However, this raises the question to what extent such models violate the insights of probability theory. In this talk I will discuss this question by comparing alternative modellings of some examples.


D. Michael Risinger, "Against Symbolization—Some reflections on the limits of formal systems in the description of inferential reasoning and legal argumentation"

"There are few, if any, useful ideas in economics
that cannot be expressed in clear English."
John Kenneth Galbraith, The New Industrial State 419 (3rd ed., 1978)

When Jeremy Bentham wanted to summarize his felicific calculus, he wrote a mnemonic verse. When modern rational choice and expected utility theorists do the same, they write a formally symbolized expression. I want to suggest that Bentham’s instinct in this regard was superior. Formal symbolization, and its implication of an underlying mathematizability, has great and fecund power when something approaching defensible numerical values is, or can be made, available. This power is what justifies the loss in general availability that rendering things in specialized symbolic language entails. But when such values are not available, and are not likely to become available, then symbolization becomes an act of mystification with very little benefit and the potential for much mischief.



Boaz Sangero (with Mordechai Halpert), "Proposal to Reverse the View of a Confession: From Key Evidence Requiring Corroboration to Corroboration for Key Evidence"

Both case law and legal literature have recognized that all evidence, and not just clearly statistical evidence, is probabilistic. Therefore, we have much to learn from the laws of probability with regard to the evaluation of evidence in a criminal trial. The present article focuses on the confession. First, we review legal and psychological literature and show that the probability of a false confession and, consequently, a wrongful conviction, is far from insignificant. In light of this, we warn against the cognitive illusion, stemming from the fallacy of the transposed conditional, which is liable to mislead the trier of fact in evaluating the weight of a confession. This illusion occurs when the trier of fact believes that, if there is only a low probability that an innocent person would falsely confess, then there is also only a low probability of innocence in each and every case where a person does confess guilt. The surprising truth is that even if there is little doubt regarding the credibility of confessions in general, in some cases there remains considerable doubt regarding the certainty of a conviction.

We demonstrate this through the case of George Allen, who was convicted in 1983 of the rape and murder of Mary Bell. This is an example of a case in which the fallacy reaches extreme proportions, since nothing connected the accused to the crime apart from his confession.

Following this, we turn to a Bayesian calculation of probability for evaluating the weight of a confession. The probabilistic calculation that we perform dictates a new and surprising conclusion that calls for a significant reversal in how we view the confession: a confession should be treated only as corroboration of other solid evidence, if such evidence exists, and not as key evidence for a conviction. Given the real danger of convicting innocents, we call on law enforcement officials to refrain from interrogating a person with the aim of extracting a confession when there is no well-established suspicion against that person, even when the law allows for such an interrogation. Moreover, we call on legislatures to amend the law so that such an interrogation would not be possible, and to provide that a confession is insufficient to constitute the sole, or key, evidence for a conviction and can be used only as corroboration for other key evidence, if it exists.
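
To make the transposed-conditional point concrete, here is a small worked example in Python. The figures are invented purely for illustration; they are not taken from the article or from the Allen case.

# Invented illustrative numbers; not figures from the article or from the Allen case.
p_confess_given_innocent = 0.001   # assumed: 1 in 1,000 innocent suspects would falsely confess
p_confess_given_guilty = 0.5       # assumed: half of guilty suspects confess
p_guilty_prior = 0.01              # assumed prior: a suspect chosen on weak grounds of suspicion

# Bayes' theorem for P(guilty | confession)
num = p_confess_given_guilty * p_guilty_prior
den = num + p_confess_given_innocent * (1 - p_guilty_prior)
p_guilty_given_confession = num / den

print(f"P(confession | innocent) = {p_confess_given_innocent}")            # low: 0.001
print(f"P(innocent  | confession) = {1 - p_guilty_given_confession:.2f}")  # roughly 0.17
# The low probability of a false confession does not transfer to a low probability of
# innocence given a confession, because the prior probability of guilt was itself low.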



Giovanni Sartor (with Giuseppe Contissa), "Evidence arguments in air traffic safety. A model for the law?"
Eurocontrol, the European institution in charge of air traffic control, requires a safety assessment both for an ongoing operation (Unit Safety Case) and for major changes to that operation (Project Safety Case). This assessment is based on the idea of a Safety Case, understood as the "presentation of Argument and Evidence that an overall claim is true". In such an assessment, safety must be demonstrated, with the support of relevant evidence, both with regard to Success Cases and with regard to Failure Cases. In this contribution we will consider whether this argument-based approach may also be appropriate for presenting legally relevant evidence (on safety and on its failures), and how safety arguments could be improved with argument models and structures developed within evidence and AI & law research. In particular, we shall address the prospects of the ALIAS project, which aims at developing a Legal Case, understood as a method for providing evidence that operations or changes to them comply with legal requirements and provide for a proper allocation of liabilities.

Peter Tillers, "A Rube Goldberg Approach to Factual Inference in Legal Settings"

There is no single logical or analytical process that characterizes effective human deliberation about ambiguous, incomplete, and inconclusive evidence about facts. Some or many of the early proponents of AI knew this: They knew or believed that intelligent creatures -- like those who possess mechanisms like our neural brains -- have a variety of distinct information processing mechanisms (e.g., sensory mechanisms, mechanisms for storing sensory data, etc.) that are clumped together and that, taken together, somehow manage to generate sweet inferential and epistemic music. The work that I did with David Schum led me to an analogous conclusion: Investigating factual hypotheses and drawing inferences about factual hypotheses involve a variety of distinct marshaling processes that, despite their distinctiveness (or, perhaps one might even say, incommensurability), work together in a way not specifiable by any kind of recipe or strict logic to produce elegant and frequently accurate factual inferences. However, it does not follow and it is not true that structured deliberation about evidence is pointless.



Bart Verheij, "Can the argumentative, narrative and statistical perspectives on legal evidence and proof be integrated?"
In the study of legal evidence and proof, three theoretical perspectives can be discerned: argumentative, narrative and statistical. In the argumentative perspective, it is considered how hypothetical facts are supported or attacked by the arguments that can be based on the available evidence. A key theoretical issue in this perspective is how a bundle of related argumentative elements (e.g. in the form of a Wigmorean chart) determines which hypothetical facts are justified and which are not. The issue has led to an intricate web of theoretical and mathematical studies of what might be called argumentation logics. As a result, it has become a tricky question how (and which) argumentation logics are relevant for the analysis of arguments as they occur in practice.

In the narrative perspective, it is acknowledged that hypothetical facts never come alone; instead, they occur in a context of relevance that takes the form of a coherent story that fits the evidence. The narrative perspective emphasizes the holistic nature of the judgment of evidential data and their analysis. A key theoretical puzzle for the narrative perspective is how to avoid the dangers of the persuasive properties of stories (cf. the warning that good stories can push out true stories), in particular by explicating the connection of narrative considerations with the context of justification that counts when evaluating the evidence. Another issue is how alternative stories are to be constructed and compared, thereby preventing tunnel vision and allowing a well-balanced investigation and decision about the facts of a case.

The statistical perspective focuses on quantitative analyses, in particular on the basis of the careful collection of empirical evidence. The statistical perspective builds on well-established mathematical and methodological theory and practice, and can for instance explain how the conditional probabilities that connect evidence and hypotheses (as they arise from empirical investigation or expert judgment) change in the light of new evidence. A key issue for the statistical perspective, with both theoretical and practical connotations, is that normally only a fragment of the necessary quantitative input information is available. A second issue is how the descriptive focus of the statistical method is to be connected to the normative context in which evidential decision making takes place.
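
As a minimal sketch of the kind of updating referred to here (the likelihood ratios below are invented for illustration and are not drawn from the talk), conditionally independent items of evidence can be combined in odds form:

# Invented likelihood ratios, purely for illustration; assumes the items of evidence
# are conditionally independent given the hypothesis.
def update(prior_prob, likelihood_ratios):
    odds = prior_prob / (1 - prior_prob)        # prior odds of the hypothesis
    for lr in likelihood_ratios:                # each ratio: P(E | H) / P(E | not H)
        odds *= lr                              # posterior odds after that item of evidence
    return odds / (1 + odds)                    # convert back to a probability

# Hypothetical case: a weak prior, two supporting items, one item pointing the other way.
print(f"{update(0.05, [10, 4, 0.5]):.2f}")      # about 0.51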

In this talk, the question is investigated whether and to what extent it is possible to develop an integrating theoretical perspective in which argumentative, narrative and statistical considerations about legal evidence and proof find a natural unification.


Douglas Walton (with Floris Bex), "Combining Evidential and Legal Reasoning with Burdens and Standards of Proof"
In this paper, we provide a formal logical model of evidential reasoning with proof standards and burdens of proof that enables us to evaluate evidential reasoning by comparing stories on either side of a case. It is based on a hybrid inference model that combines argumentation and explanation, using inference to the best explanation as the central form of argument. The model is applied to one civil case and two criminal cases. It is shown to have some striking implications for modeling and using traditional proof standards like preponderance of the evidence and beyond reasonable doubt.

