Abstracts and Slides
-
Wilfried Buchholz
On the Ordnungszahlen in Gentzen's First Consistency Proof
[Slides] [Handout]
We will show how Gentzen's assignment of finite decimal fractions (Ordnungszahlen) to derivations in his first (1936) consistency proof for arithmetic can, by a slight modification, be transformed into a familiar-looking assignment of set-theoretic ordinals using standard ordinal arithmetic. This modified assignment is rather close to the assignment in Gentzen's second (1938) consistency proof, and can already be found in a little-known paper by L. M. Kogan-Bernstein (1981).
-
Andrea Cantini
About Truth, Explicit Mathematics and Sets
In 1987 Gerhard Jäger introduced elementary theories of explicit types and names, which have since played a fundamental role in the development of Feferman's program of explicit mathematics, as well as in several proof-theoretic investigations. In this talk we deal with results witnessing the interaction of the notions of truth, explicit type and set.
-
Roy Dyckhoff
Intuitionistic Decision Procedures since Gentzen
[Slides]
This being the 25th year since Hudelmaier's re-invention in 1988 of Vorob'ev's method (from 1950) of deciding provability in intuitionistic propositional logic, it seems appropriate to attempt a survey of the field, including recent advances.
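For orientation, the key left-implication rules of the Vorob'ev–Hudelmaier contraction-free calculus (often referred to as LJT or G4ip) are usually presented roughly as follows; the notation here is ours and merely illustrative:
\[
\frac{\Gamma, p, B \vdash G}{\Gamma, p, p \to B \vdash G}
\qquad
\frac{\Gamma, C \to (D \to B) \vdash G}{\Gamma, (C \land D) \to B \vdash G}
\qquad
\frac{\Gamma, C \to B, D \to B \vdash G}{\Gamma, (C \lor D) \to B \vdash G}
\qquad
\frac{\Gamma, D \to B \vdash C \to D \quad\; \Gamma, B \vdash G}{\Gamma, (C \to D) \to B \vdash G}
\]
Splitting the left-implication rule according to the form of the implication's antecedent removes the need for contraction, so backward proof search terminates without loop checking.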
-
Solomon Feferman
The Operational Perspective
[Slides]
The operational perspective is embodied in a uniform applicative framework that is highly adaptable to a variety of foundational programs. As time permits, this talk will survey several of them, including the explicit mathematics program, the unfolding of open-ended schematic axiom systems, a logic for mathematical practice, and operational set theory for the functional formulation of small large cardinal axioms. All of these have led to a number of fine proof-theoretic results and continue to raise interesting new problems.
-
Rajeev Goré
From Display Calculi to Decision Procedures via Deep Inference for Full Intuitionistic Linear Logic
[Slides]
Full Intuitionistic Linear Logic (FILL) is multiplicative intuitionistic linear logic extended with par. Its proof theory has been notoriously difficult to get right, and existing sequent calculi all involve inference rules with complex annotations to guarantee soundness and cut-elimination. We give a simple and annotation-free display calculus for FILL which satisfies Belnap's generic cut-elimination theorem. To do so, our display calculus actually handles an extension of FILL, called Bi-Intuitionistic Linear Logic (BiILL), with an 'exclusion' connective defined via an adjunction with par. We refine our display calculus for BiILL into a cut-free nested sequent calculus with deep inference in which the explicit structural rules of the display calculus become admissible. A separation property guarantees that proofs of FILL formulae in the deep inference calculus contain no trace of exclusion, and each of its rules is sound for the semantics of FILL; thus our deep inference calculus and display calculus are conservative over FILL. The deep inference calculus also enjoys the subformula property and terminating backward proof search, which yields the NP-completeness of BiILL and FILL.
Joint work with Ranald Clouston, Jeremy Dawson and Alwen Tiu.
-
Per Martin-Löf
Sample Space–Event Time
The notion of context in the sense of dependent type theory will be related to the notion of sample space underlying probability theory and statistics, and, correlatively, the dependency structure between the variables of a context will be related to the notion of causal dependency between events.
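As a minimal illustration (notation ours): a context in dependent type theory is a list of typed variables in which each type may depend on the variables declared before it,
\[
x_1 : A_1,\; x_2 : A_2(x_1),\; \ldots,\; x_n : A_n(x_1, \ldots, x_{n-1}),
\]
and it is this dependency structure that the talk proposes to compare with the causal dependency between events.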
-
Pierluigi Minari
Transitivity Elimination: Where and Why
[Slides]"Analytic" equational proof systems for combinatory logic and lambda calculus, which admit the elimination of the transitivity rule, will be surveyed in this talk, together with the main applications of transitivity elimination.
-
Grigori Mints
Two Examples of Cut Elimination for Non-Classical Logics
[Slides]
We consider syntactic cut elimination procedures for two non-classical propositional systems where unusual modifications of the standard Gentzen schema are needed.
Modal system GTD is obtained by adding the axiom \(\Box A \Longleftrightarrow \Box\Box A\) to the basic system K. To ensure complete cut elimination, both modal rules (in the antecedent and succedent) have to use both implications in the equivalence above.
The system S4C has the "next" operation \(\circ\) in addition to the standard S4 modality \(\Box\). \(\circ\) commutes with all Boolean connectives, but only the implication \(\circ\Box A \rightarrow \Box\circ A\) (not its converse) is postulated. In this case an essential transformation of the "endpiece" of a derivation is needed before an essential cut-reduction becomes applicable.
-
Wolfram Pohlers
From Subsystems of Classical Analysis to Subsystems of Set Theory
A personal account
[Slides] [Transcript]
In this talk we want to give an overview, from a very personal standpoint, of why the emphasis of ordinal analysis shifted from subsystems of second-order number theory to subsystems of set theory.
-
Michael Rathjen
When Kripke–Platek Set Theory Meets Powerset
[Slides]
KP and extensions via axioms asserting the existence of many admissible sets have been studied intensively in admissible proof theory. Augmenting KP with the powerset operation, either by adding the powerset axiom or by treating the operation as a primitive (Power KP), leads to theories that are to some extent amenable to proof-theoretic techniques. However, in several respects these theories and their models behave differently from their admissible cousins. The talk will be concerned with classical as well as semi-intuitionistic and intuitionistic versions of KP enriched by powerset axioms. I plan to address some questions that have latterly sprung up in this area.
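For reference, the powerset axiom in question is the usual one, stated here in our notation:
\[
\forall x\, \exists y\, \forall z\, (z \in y \leftrightarrow z \subseteq x).
\]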
-
Peter Schroeder-Heister
Proofs That, Proofs Why, and the Analysis of Paradoxes
[Slides]
In an appendix to "Natural Deduction" (1965), Prawitz showed that the proof of Russell's paradox in naive set theory does not normalize. This implicitly suggests that non-normalizing proofs are no 'real' proofs, in contradistinction to normalizing proofs. This idea was turned into a general characteristic of paradoxes by Tennant in "Proof and Paradox" (1982). I pursue this idea further towards an analysis of paradoxes in a typed framework in which, for the application of certain inference rules, notably modus ponens and cut, it is crucial that the terms involved normalize. Here proofs with normalizing proof terms are conceived as legitimate "proofs why", whereas proofs of propositions without normalizing proof terms are "proofs that", which are not always justified.
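To recall the setting (a sketch in our own words): in naive set theory one forms \(R := \{x \mid x \notin x\}\), and comprehension yields \(R \in R \leftrightarrow R \notin R\); combining the two directions gives a closed derivation of \(\bot\), and Prawitz's observation is that its reduction sequence loops, recreating the very maximal formula it was supposed to remove, so the derivation never normalizes.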
-
Peter Schuster
Folding Up
The characteristic axiom of many an ideal object in algebra and logic (prime ideal or filter, dichotomy of order, complete theory, etc.) can be put as a variant of the interpretation of disjunction in a model of classical logic: that is, "if \(A \lor B\) holds, then either \(A\) holds or \(B\) holds". Typically the characteristic axiom can be reduced to the special case that corresponds to the interpretation of disjunction with \(A = B\). (Unlike the latter, the former is not always trivial.) This reduction is usually done by invoking an appropriate form of the Axiom of Choice (Krull's Lemma, the Artin–Schreier Theorem, Gödel's Completeness Theorem, etc.); whence it falls short of telling us why it works and why \(A = B\) matters. In spite of the transfinite odour surrounding all this, we aim at achieving the reduction by finite methods only, and expect thus to gain reasons on top of the sheer facts.
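A standard instance (our example, not necessarily one from the talk): for an ideal \(P\) of a commutative ring, the characteristic axiom of primality reads "if \(ab \in P\), then \(a \in P\) or \(b \in P\)", and its special case \(a = b\) reads "if \(a^2 \in P\), then \(a \in P\)", i.e. that \(P\) is a radical ideal. Classically this reduction is effected via Krull's Lemma, by which every radical ideal is an intersection of prime ideals.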
-
Helmut Schwichtenberg
Computational Content of Proofs Involving Coinduction
[Slides]
We sketch a theory TCF of computable functionals, with free algebras as base types and simultaneous inductively/coinductively defined predicates, both in nested form and with parameters. The theory has recursion and corecursion operators and contains induction as well as coinduction. A soundness theorem w.r.t. (modified) realizability is proved for this setting. As an application we formalize work of Ulrich Berger (2011) dealing with base type (cototal, i.e., infinite) representations of real numbers and uniformly continuous functions. It turns out that efficient algorithms can be extracted from proofs (involving coinduction) of appropriate propositions in constructive analysis.
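As a toy illustration of a cototal (i.e. stream) representation, far simpler than the formalized material but in the same spirit, here is a hedged Haskell sketch; all names are ours:

-- Signed-digit representation: an infinite stream d1, d2, ... of digits
-- in {-1, 0, +1} denotes the real number sum_i d_i * 2^(-i) in [-1, 1].
data SDigit = M | Z | P              -- -1, 0, +1
data Stream = Cons SDigit Stream

-- Corecursive (productive) definition: negate every digit.
-- If x = sum_i d_i * 2^(-i), then -x = sum_i (-d_i) * 2^(-i).
negS :: Stream -> Stream
negS (Cons d ds) = Cons (negD d) (negS ds)
  where
    negD M = P
    negD Z = Z
    negD P = M

-- Example: the stream 1, 0, 0, ... denotes 1/2; negS of it denotes -1/2.
half :: Stream
half = Cons P zeros
  where zeros = Cons Z zeros

Proofs about such cototal objects proceed by coinduction, and program extraction turns them into corecursive programs of this shape.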
-
Anton Setzer
Pattern and Copattern Matching
[Slides]
In automated theorem proving tools, the induction principle for algebraic data types is often represented by termination-checked pattern matching, i.e. possibly iterated case distinctions on the choice of constructors for an element of the data type. The dual of induction is coinduction, which is an introduction rule. It can be represented by the dual of pattern matching, namely copattern matching, that is, by determining the result of applying the destructors to the element being defined.
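For illustration only (Haskell has no copatterns; this sketch merely mimics the idea with lazy record fields, and all names are ours):

-- Pattern matching analyses an element of an algebraic data type
-- by case distinction on its constructors.
data Nat = Zero | Suc Nat

plus :: Nat -> Nat -> Nat
plus Zero    m = m
plus (Suc n) m = Suc (plus n m)

-- Copattern matching, dually, defines an element of a coalgebraic type
-- by specifying the result of each destructor (here hd and tl).
data Stream a = Stream { hd :: a, tl :: Stream a }

from :: Integer -> Stream Integer
from n = Stream { hd = n, tl = from (n + 1) }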
We will introduce the theory of nested pattern and copattern matching (joint work with Andreas Abel, Brigitte Pientka and David Thibodeau), and discuss how nested pattern and copattern matching can be reduced, at least when it passes a strict termination checker, to primitive induction and coinduction.
We will also present a little theorem which restricts what can be achieved in intensional type theory with decidable type checking: even the assumption that streams are equal if their heads and tails are equal forces equality on streams to be undecidable.
-
Stan Wainer
A Miniaturized Predicativity
[Slides]
The basis of this work is Leivant's (1995) theory of ramified induction over N, which has elementary recursive strength. It has been redeveloped and extended in various ways by many people; e.g. Spoors and Wainer (2012) built a hierarchy of ramified theories whose strengths correspond to the levels of the Grzegorczyk hierarchy. Here, a further extension of this hierarchy is developed, in terms of a predicatively generated infinitary calculus with transfinitely many stratified levels of numerical inputs. However, this theory is quite weak, because the ordinal assignment is "slow growing" and the levels are generated autonomously according to a correspondingly weak (though natural) notion of transfinite induction. It turns out that the provably computable functions are now those elementary recursive in the Ackermann function. All this is closely related to recent work of Jäger & Probst and Ranzi & Strahm on iterated stratified induction, but whereas their theories have full, complete induction as a basis, ours do not.