August 9-10, 2022
The workshop will be held in person on the campus of the University of Regensburg:
Vielberth-Gebäude, room H26.
Because of COVID-19 restrictions, we also offer the option of online participation.
Please register (for free) here: https://forms.gle/B1iEiA8JGK8Y5v8M6
The Zoom link for online participation will be sent to registered participants shortly before the workshop starts.
The aim of this interdisciplinary workshop is to bring together philosophers, probability theorists, logicians, artificial intelligence researchers, and psychologists to discuss selected problems in the broad domains of reasoning and uncertainty, including inferences about conditionals, inconsistency, and coherence, how to argue and make decisions rationally, and how to represent and manage uncertainty and incomplete knowledge. We also aim to discuss philosophical foundations and methodological questions concerning reasoning research, for example, whether and how combining conceptual, formal (e.g., (non-classical) logic, (coherence-based) probability logic, and artificial intelligence models), and empirical research methods may cross-fertilise reasoning research along its normative and descriptive dimensions.
Tuesday, August 9th, 2022
9:45 | Welcome |
10:00 - 10:45 | Sabine Frittella: Non-standard probabilities and belief functions over Belnap Dunn logic |
10:45 - 11:30 | Chris Fermüller: Re-visiting Giles's game |
11:30 - 12:00 | Coffee break |
12:00 - 12:45 | Andrea Capotorti: The role of coherent probabilities... nowadays |
12:45 - 14:30 | Lunch break |
14:30 - 15:15 | Anthony Hunter: Introduction to probabilistic approaches to modelling argumentation |
15:15 - 16:00 | Bart Verheij: Arguments, scenarios and probabilities as tools for reasoning and uncertainty |
16:00 - 16:30 | Coffee break |
16:30 - 17:15 | Hans Rott: Evidential support in conditionals - qualitative and probabilistic approaches |
18:15 | Regensburg guided tour and dinner |
Wednesday, August 10th, 2022
10:00 - 10:45 | Angelo Gilio: On coherence and conditionals |
10:45 - 11:30 | Giuseppe Sanfilippo: Trivalent logics, compound and iterated conditionals, and conditional random quantities |
11:30 - 12:00 | Coffee break |
12:00 - 12:45 | Niki Pfeifer & Leon Schöppl: Connexive logic: X-Phi results and coherence-based probability semantics |
12:45 - 14:30 | Lunch break |
14:30 - 15:15 | Nicole Cruz: Measuring coherence in reasoning |
15:15 - 16:00 | David Over: Independence conditionals and inferentialism |
16:00 - 16:30 | Coffee break |
16:30 - 17:15 | Gianluigi Oliveri: On knowledge and uncertainty in contemporary empirical science |
17:15 - 18:00 | Niki Pfeifer & Romina Schmid: Early experimental research on deductive reasoning |
19:00 | Dinner |
Andrea Capotorti (University of Perugia, Italy)
Nicole Cruz (University of Innsbruck, Austria)
Chris Fermüller (TU Wien, Austria)
Sabine Frittella (INSA Centre Val de Loire, France)
Angelo Gilio (Sapienza University of Rome, Italy)
Anthony Hunter (University College London, UK)
Gianluigi Oliveri (University of Palermo, Italy)
David Over (Durham University, UK)
Romina Schmid (University of Regensburg, Germany)
Leon Schöppl (University of Regensburg, Germany)
Bart Verheij (University of Groningen, Netherlands)
Organisers:
Niki Pfeifer (University of Regensburg, Germany)
Hans Rott (University of Regensburg, Germany)
Giuseppe Sanfilippo (University of Palermo, Italy)
The role of coherent probabilities... nowadays
Precisely because we are living in the era of “Big Data”, “smart cities”, the “IoT”, etc., the coherence principle - for both unconditional and conditional probability assessments - can play a crucial role. Given the huge amount and heterogeneity of data, complete models are a mere “chimera”, so suitably tailored and dynamically modifiable models become all the more necessary. Furthermore, the most significant situations involve interconnections between highly relevant events; these interconnections must be taken into account and play a crucial role in merging different sources of (probabilistic) information. Last but not least, recent events (the pandemic, international crises, etc.) have shown the full practical relevance of unexpected scenarios. In the talk, these aspects will be stressed and connected with the peculiarities of coherent assessments, drawing on various contributions on these subjects presented in the recent past.
The slides for this talk can be found here.
Measuring coherence in reasoning
Consider the inference sequence “The glass had orange juice, therefore it had orange juice or tequila, therefore if it did not have orange juice then it had tequila”. How convincing is it? To draw inferences like this, people may consider the meanings of the statements involved (how is “orange juice or tequila” to be interpreted in this context?), their degrees of belief that each statement is true (do we know for certain that the glass had orange juice?), and any logical relations between the statements (e.g. does one statement entail or preclude another?). In reasoning research, these three pieces of information have often been treated as independent and potentially conflicting – it is often seen as rational to take into account logical relations, and as biased to take into account content and beliefs. But theoretically such a conflict is not necessary, and empirically it does not seem plausible. In the Bayesian approach to reasoning described here, the three pieces of information are integrated and jointly necessary to draw good inferences. This approach is based on the concept of coherence. Degrees of belief in statements are coherent iff they follow the principles of probability theory (e.g. the probability that the glass has orange juice cannot be higher than the probability that it has orange juice or tequila). But measuring the coherence of people’s uncertain reasoning is not straightforward, especially in situations in which the information available is uncertain, incomplete and changeable. To make such measurements, we must account for how logical constraints between probabilities shift when new information becomes available; define and adjust for the probability of making a coherent response just by chance; and ascertain which patterns of statement probabilities would allow us to make plausibly falsifiable, and thus informative, assessments of sensitivity to coherence. I describe some of these challenges, and discuss possible ways of addressing them in the quest to increase our understanding of reasoning under uncertainty.
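As a rough illustration of the coherence constraint mentioned in the abstract (a sketch of ours, not part of the talk), the following snippet checks whether a pair of probability judgments respects P(A) ≤ P(A or B); the function name and example numbers are invented:

```python
# Minimal sketch: one coherence constraint from probability theory, namely that
# the probability of A cannot exceed the probability of A-or-B.

def respects_or_introduction(p_a: float, p_a_or_b: float, tol: float = 1e-9) -> bool:
    """True iff the pair of judgments satisfies P(A) <= P(A or B)."""
    return p_a <= p_a_or_b + tol

# Judging P(orange juice) = 0.9 but P(orange juice or tequila) = 0.7 is incoherent:
print(respects_or_introduction(0.9, 0.7))  # False
print(respects_or_introduction(0.7, 0.9))  # True
```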
The slides for this talk can be found here.
Re-visiting Giles's Game
Already in the 1970s, Robin Giles suggested modelling reasoning about physical experiments by a combination of a logical dialogue game and a betting scheme on the results of elementary (yes/no) experiments. Later, Giles claimed that his game also provides a semantic foundation for fuzzy logic. Formally, Giles's game characterizes Łukasiewicz logic. More recent work demonstrates that variants and extensions of Giles's game can be employed to characterize other fuzzy logics as well as a large family of fuzzy quantifiers. Giles's ideas are still relevant for modelling reasoning under vagueness and uncertainty in general and hence deserve to be better known. We will present the game without assuming any specific knowledge of fuzzy logics or the philosophy of physics. The apparent contradiction between the truth-functionality of the characterized logics and their grounding in (non-truth-functional) probabilities will be addressed. Moreover, we provide a quick overview of the host of more recent results that have grown out of a careful analysis of Giles's game.
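For readers unfamiliar with Łukasiewicz logic, the following sketch shows the standard [0, 1]-valued connectives that Giles's game is known to characterize (background material, not code from the talk); the comment on the betting reading is only a rough gloss:

```python
# Standard [0,1]-valued Lukasiewicz connectives (background sketch).

def neg(a: float) -> float:
    """Lukasiewicz negation."""
    return 1.0 - a

def implies(a: float, b: float) -> float:
    """Lukasiewicz implication."""
    return min(1.0, 1.0 - a + b)

def strong_and(a: float, b: float) -> float:
    """Lukasiewicz strong conjunction (t-norm)."""
    return max(0.0, a + b - 1.0)

# In Giles's betting reading, 1 minus the value of a formula can roughly be
# thought of as the expected loss of a player who asserts it.
print(implies(0.7, 0.4))     # 0.7
print(strong_and(0.7, 0.4))  # ~0.1
```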
The slides for this talk can be found here.
Non-standard probabilities and belief functions over Belnap Dunn logic
Belnap Dunn logic is a four-valued logic introduced in order to reason with incomplete and/or inconsistent information. It relies on the idea that pieces of evidence supporting a statement and its negation can be independent. Non-standard probabilities have been proposed to generalize the notion of probability over formulas of Belnap Dunn logic. Here, we continue this line of research and study the implications of using mass functions, belief functions and plausibility functions to formalize reasoning with incomplete/contradictory evidence within the framework of Belnap Dunn logic.
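As background for readers unfamiliar with Belnap Dunn logic (this sketch is ours, not the talk's), the four values can be presented as pairs recording evidence for and evidence against a statement; the probabilistic generalizations mentioned in the abstract then drop the classical requirement that p(φ) and p(¬φ) sum to 1:

```python
# Background sketch: Belnap-Dunn values as pairs (evidence for, evidence against).

from typing import NamedTuple

class BD(NamedTuple):
    pro: bool  # there is evidence supporting the statement
    con: bool  # there is evidence against the statement

T = BD(True, False)   # told true only
F = BD(False, True)   # told false only
B = BD(True, True)    # both (inconsistent information)
N = BD(False, False)  # neither (no information)

def neg(x: BD) -> BD:
    return BD(x.con, x.pro)          # negation swaps the two components

def conj(x: BD, y: BD) -> BD:
    return BD(x.pro and y.pro, x.con or y.con)

def disj(x: BD, y: BD) -> BD:
    return BD(x.pro or y.pro, x.con and y.con)

# Evidence for a statement and evidence against it can coexist:
print(conj(T, B))  # BD(pro=True, con=True), i.e. the value "both"
```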
The slides for this talk can be found here.
On Coherence and Conditionals
We illustrate basic notions concerning coherence and conditionals. We first recall the three levels of knowledge of events described in a 1980 paper by de Finetti, giving a hint of the extension to conditional events. Then, we examine in more depth some basic aspects of coherence, describing the equivalence among different schemes for making conditional probability assessments. In particular, by exploiting a suitably extended notion of a conditional random quantity X|H, we show the equivalence between conditional bets and bets on conditionals. We also illustrate the equivalence between the conditions of coherence based on random gains and the geometrical conditions based on convex hulls. Based on the geometrical approach, we show that our notion of the conjunction of two conditional events can be represented as a conditional random quantity in different equivalent ways. We then briefly illustrate some intuitively valid probabilistic assertions on complex sentences, which we formalize by iterated conditionals; moreover, we take a look at some basic logical and probabilistic properties, valid for unconditional events, which are all preserved in our approach but are not satisfied in general in the setting of trivalent logics.
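For orientation, one formulation from the coherence-based literature of the conjunction of two conditional events as a conditional random quantity (a sketch, with x = P(A|H), y = P(B|K), and z the prevision of the conjunction itself) is:

```latex
(A|H) \wedge (B|K) =
\begin{cases}
1, & \text{if } AHBK \text{ is true},\\
0, & \text{if } \overline{A}H \vee \overline{B}K \text{ is true},\\
x, & \text{if } \overline{H}BK \text{ is true},\\
y, & \text{if } AH\overline{K} \text{ is true},\\
z, & \text{if } \overline{H}\,\overline{K} \text{ is true}.
\end{cases}
```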
The slides for this talk can be found here.
Introduction to Probabilistic Approaches to Modelling Argumentation
Computational models of argument aim to capture aspects of the human ability to reason with complex information (including information that is incomplete, inconsistent, or uncertain) and with different perspectives, for making sense of the world, for making decisions, and for persuasion. Argumentation can be monological (involving a single agent) or dialogical (involving multiple agents in a discussion, debate, etc.). In this talk, we will look at some features of computational models of argument based on graph theory and logic, and then consider how we can capture aspects of uncertainty in these models by drawing on probability theory. In particular, we will look at two approaches to probabilistic argumentation, the constellations approach and the epistemic approach, and then turn to a new proposal called epistemic graphs for modelling uncertainty involving both attacking and supporting arguments.
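To give a flavour of the epistemic approach (a background sketch, not material from the talk), degrees of belief are assigned to arguments directly, and various rationality constraints relate them to the attack relation; the snippet below checks one commonly discussed constraint, that a strongly believed argument should not attack another strongly believed argument, on an invented example:

```python
# Background sketch: checking one rationality constraint of the epistemic
# approach: if P(A) > 0.5 and A attacks B, then P(B) <= 0.5.

belief = {"A": 0.8, "B": 0.4, "C": 0.6}  # illustrative degrees of belief
attacks = [("A", "B"), ("B", "C")]       # A attacks B, B attacks C

def rational(belief: dict, attacks: list) -> bool:
    return all(not (belief[a] > 0.5 and belief[b] > 0.5) for a, b in attacks)

print(rational(belief, attacks))  # True for the assignment above
```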
The slides for this talk can be found here.
On Knowledge and Uncertainty in Contemporary Empirical Science
If X is a belief, and by 'X is certain' we mean that there is no doubt as to whether X is true or false, we must conclude that uncertainty plays a major role in contemporary empirical science. But, on the face of it, this is incompatible with the hard-won status of knowledge-producing activity commonly attributed to such an important and celebrated part of human intellectual endeavour. Indeed, if contemporary empirical science were not a knowledge-producing activity, how could we account for its extraordinary heuristic, explanatory, predictive, and demonstrative power, and for the unprecedented possibilities of controlling the environment made available by science through the use of technology? We claim that the conflict between the status of knowledge-producing activity enjoyed by contemporary empirical science and the role played in it by uncertainty is only apparent, for it depends upon outdated conceptions of knowledge.
Independence conditionals and inferentialism
There has been increasing confirmation, supporting Bayesian approaches in the psychology of reasoning, of the conditional probability hypothesis that the probability of the natural language conditional, P(if p then q), is the conditional probability of q given p, P(q|p). Some studies focusing on conditionals with causal content have found possible exceptions to the hypothesis when p does not increase the probability of q, e.g., when p and q are independent. Other studies have not supported this conclusion. But the former studies have encouraged the development of truth-condition inferentialism, which claims that there must be a compelling argument from p to q for "if p then q" to be true. Everyone agrees that there are some examples of true "if p then q" in which p and q are independent. Consider: “If your children get the MMR vaccine, they will not develop autism.” Inferentialists have tried to dismiss such uses as “non-standard”. It is, however, circular to argue that a theory only applies to "standard" cases, and that the "standard" cases are the ones the theory applies to. A theory that takes this line is untestable. Conditionals "if p then q", in which p is independent of q, could be called independence conditionals. Explicit and implicit uses of independence conditionals are common and perfectly "standard" in any reasonable sense of the word. Special points have to be made about them when they are used in the conditional inferences usually studied in the psychology of reasoning (modus ponens, modus tollens, affirmation of the consequent, and denial of the antecedent), but independence conditionals have an important role to play in human reasoning, particularly in causal reasoning.
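As a simple worked illustration of the point about independence (the numbers are invented, not from the talk): under the conditional probability hypothesis, P(if p then q) = P(q|p), and when p and q are independent this reduces to P(q), so an independence conditional can still be highly probable:

```latex
P(q \mid p) = \frac{P(p \wedge q)}{P(p)} = \frac{P(p)\,P(q)}{P(p)} = P(q).
% E.g., if P(\text{no autism}) = 0.98 and vaccination is independent of autism,
% then P(\text{if vaccinated, then no autism}) = 0.98, even though the antecedent
% does not raise the probability of the consequent.
```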
The slides and references for this talk can be found here and here.
Evidential support in conditionals - qualitative and probabilistic approaches
In this talk, I will contrast two recent views on how the antecedent of a natural-language conditional may be interpreted as providing evidential support for the conditional's consequent. I look at qualitative and probabilistic implementations of the two ideas.
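For orientation (and without presupposing the talk's own proposals), a common probabilistic rendering of "the antecedent A provides evidential support for the consequent C" is positive relevance:

```latex
P(C \mid A) > P(C), \qquad \text{equivalently } P(C \mid A) > P(C \mid \neg A)
\quad \text{whenever } 0 < P(A) < 1.
```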
Trivalent logics, compound and iterated conditionals, and conditional random quantities
The problem of how to assign degrees of belief to conjunctions or disjunctions of conditionals, or to conditionals with conditionals in their antecedents or consequents, has been widely studied in the literature. As an example of a conjoined conditional, consider two soccer matches; for each (uncancelled) match the possible outcomes are home win, draw, and away win. Then the sentence "The outcome of the 1st match is home win (if the 1st match is uncancelled) and the outcome of the 2nd match is draw (if the 2nd match is uncancelled)" is a conjoined conditional, because each conjunct is itself a conditional. Usually, a conditional event is looked at as a three-valued object (with possible values true, false, and void), and compound and iterated conditionals have been defined in trivalent logics. We verify that none of these logics satisfies all the basic logical and probabilistic properties valid for unconditional events. Then, we consider our approach to compound and iterated conditionals in the setting of coherence as suitable conditional random quantities. We verify that in our approach all the basic logical and probabilistic properties are preserved. We illustrate possible applications of compound and iterated conditionals to the psychology of uncertain reasoning, to connexive logic, and to non-monotonic reasoning in artificial intelligence.
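As background (our sketch, not the talk's code), the three-valued reading of a conditional event and one possible trivalent conjunction look as follows; different trivalent logics define the connectives differently, which is part of what the talk examines, and the coherence-based approach instead replaces the void value by a prevision:

```python
# Background sketch: a conditional event A|H as a three-valued object, and a
# strong-Kleene / de Finetti-style conjunction (other trivalent logics differ).

VOID = "void"

def conditional(a: bool, h: bool):
    """A|H: True if A and H, False if (not A) and H, void if H is false."""
    if not h:
        return VOID
    return a

def conj3(x, y):
    """Conjunction as the minimum under the ordering False < void < True."""
    order = {False: 0, VOID: 1, True: 2}
    return min(x, y, key=lambda v: order[v])

# 1st match uncancelled and won at home; 2nd match cancelled:
c1 = conditional(a=True, h=True)   # True
c2 = conditional(a=True, h=False)  # void
print(conj3(c1, c2))               # void
```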
Early experimental research on deductive reasoning
The philosophers and psychologists Gustav Wilhelm Störring (1860 - 1946) and Johannes Lindworsky (1875 - 1939) were pioneers in the experimental psychology of deductive reasoning. One observation, which Störring for example made, was that his participants drew conclusions from the given premises with the help of a process of insertion. Störring’s and Lindworsky’s work has received little attention. The aim of our talk, which is based on work in progress, is two-fold: (1) we want to raise more awareness of their important groundwork, and (2) we aim to trace the work of precursors and their role in later developments in the psychology of reasoning. After a brief overview of Störring’s and Lindworsky’s lives and contributions in philosophy and psychology, we will focus on their experiments on deductive reasoning. In particular, we will discuss Störring’s (1908) and (1909) papers, as well as Lindworsky’s dissertation, which was published in 1916 as the first experimental-psychological book on deductive reasoning. The latter was also inspired by Störring’s work. We will illustrate the pioneering experiments with task materials on simple argument forms, including syllogistic, spatial, and temporal inferences. Moreover, we will provide insight into Störring’s and Lindworsky’s connections and positions within the scientific community of the time. We will discuss their positioning towards psychologism, because both of them worked at the intersection of psychology and philosophy at a time when the psychologism debate was at its peak. They were also both significantly influenced by members of the Würzburg School. Their importance within the history of psychology will be assessed by the contributions they made, which can be seen as precursors to later developments in psychology (e.g., meta-cognitive concepts like the feeling of rightness, mental models, or ideas from embodied cognition). Our contribution aims to shed light on the almost forgotten early history of the experimental psychology of deductive reasoning.
Connexive logic: X-Phi results and coherence-based probability semantics
While classical logic (CL) has long been used in attempts to classify instances of human reasoning as (ir)rational, it has been undermined by both armchair intuitions and experimental results concerning naive reasoning: certain propositional formulae — like Aristotle’s thesis (¬(A → ¬A)) and Boethius' thesis ((A → B) → ¬(A → ¬B)) — are non-theorems of CL, but are judged valid by the majority of human reasoners. Hence, a variety of connexive logics have been constructed which, by validating these formulae, conform with the intuitions described above. We present the results of two experiments (total n = 72) investigating how participants judge a variety of selected propositional formulae, among them the most important connexive principles. Our data further strengthen the case for connexive logics as better logical frameworks of human reasoning than CL. Moreover, we experimentally investigated two approaches to validating connexive principles in coherence-based probability logic (see Pfeifer and Sanfilippo 2021). Overall, we observed good agreement between their predictions and the data, especially for Approach 2.
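As a quick sanity check of the claim that these principles are non-theorems of classical logic (this snippet is ours, not part of the experimental materials), a truth-table evaluation with the material conditional shows that both theses fail when A is false:

```python
# Aristotle's thesis ~(A -> ~A) and Boethius' thesis (A -> B) -> ~(A -> ~B)
# are not classical tautologies: both are false when A is false.

from itertools import product

def imp(p: bool, q: bool) -> bool:
    """Material conditional."""
    return (not p) or q

aristotle_valid = all(not imp(a, not a) for a in (True, False))
boethius_valid = all(imp(imp(a, b), not imp(a, not b))
                     for a, b in product((True, False), repeat=2))

print(aristotle_valid, boethius_valid)  # False False
```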
The slides for this talk can be found here.
Arguments, scenarios and probabilities as tools for reasoning and uncertainty
In the theory of the rational handling of evidence in crime cases, three tools are distinguished: arguments, scenarios and probabilities. Arguments can be used to analyse how the evidence supports and attacks the possible events of a crime. Scenarios can be used to analyse the various sequences of events that may explain how a crime occurred. Probabilities can be used to analyse degrees of uncertainty and how they are updated in the light of the evidence. In the talk, I present an update on recent research on arguments, scenarios and probabilities (and their combinations) as performed in Groningen, focusing on their role in reasoning and uncertainty. The work has led to research on connections between knowledge, data and reasoning, a key puzzle in current artificial intelligence.
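As a minimal illustration of the third tool (invented numbers, not from the talk), the odds form of Bayes' rule shows how the probability of a scenario H is updated by evidence E via a likelihood ratio:

```python
# Sketch: posterior odds = likelihood ratio * prior odds.

def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Update P(H) given evidence E with likelihood ratio P(E|H)/P(E|not H)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# Scenario with prior probability 0.10; evidence 50 times more likely if the
# scenario is true than if it is false:
print(round(posterior_probability(0.10, 50.0), 3))  # 0.847
```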
The slides for this talk can be found here.
This workshop is sponsored by the BMBF research project "Logische und wissenschaftstheoretische Grundlagen des Schließens unter Unsicherheit" (Logical and philosophy-of-science foundations of reasoning under uncertainty).
Organisers: Niki Pfeifer (University of Regensburg), Hans Rott (University of Regensburg), and Giuseppe Sanfilippo (University of Palermo)