Logic seminar talks
Friday, December 2, 2022, in S2|15-201 at 13:30
Alberto Marcone (University of Udine)
Some (extra)ordinary equivalences in reverse mathematics
In 1980 Rival and Sands proved the following theorem: (RSpo) Every infinite partial order P of finite width contains an infinite chain C such that every element of P is either comparable with no element of C or with infinitely many elements of C. We study this statement and its variants from the viewpoint of reverse mathematics. It turns out that various forms of (RSpo) are equivalent to different forms of the principle ADS that had been studied in reverse mathematics as a consequence of Ramsey's Theorem for pairs: ADS states that every infinite linear order has either an infinite ascending chain or an infinite descending chain. The equivalences we obtain are extraordinary because previously no theorem of ordinary mathematics was known to be equivalent to principles such as ADS which form the so-called zoo of reverse mathematics: a large collection of statements that are inequivalent to each other and to the classic big five of reverse mathematics. To obtain our results we produce a proof of (RSpo) which is completely different from the one given by Rival and Sands; we also strengthen (RSpo) by replacing finite width with no infinite antichains in the hypothesis.
Joint work with Marta Fiori Carones, Giovanni Solda and Paul Shafer.
Tuesday, July 12, 2022, in S2|15-201 at 14:00
Sarah Kleest-Meißner (HU Berlin)
Discovering Event Queries from Traces using Subsequence-Queries with Wildcards and Gap-Size Constraints
We will discuss a new tool for querying event traces: subsequence-queries with wildcards and gap-size constraints (swg-queries, for short). These are an adaptation of Angluin-style pattern languages, consisting of a string s over an alphabet of variables and types, a global window size w, and a tuple c of local gap-size constraints. A query q matches in a trace t (i.e., a sequence of types) if the variables can uniformly be substituted by types such that the resulting string occurs in t as a subsequence that satisfies the global window size and the local gap-size constraints.
We formalise and investigate the task of discovering an swg-query that best describes the traces from a given sample S of traces, and we present an algorithm solving this task. This algorithm is an extension of Shinohara's algorithm from the field of inductive inference. As a central component, our algorithm repeatedly solves the matching problem (i.e., deciding whether a given query q matches in a given trace t), which is an NP-complete problem (in combined complexity). Hence, the matching problem is of special interest: we discuss tractable subclasses, which lead to tractable subclasses of the discovery problem as well.
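As a toy illustration of the matching semantics (not the algorithm from the talk), the following Python sketch decides by brute force whether a query matches a trace. The conventions are assumptions made for the example: variables are symbols starting with 'x', each local gap-size constraint is a pair (lo, hi) bounding the number of skipped events between consecutive matched positions, and the window size bounds the span of the embedding.

from itertools import product

def matches(query, window, gaps, trace, types):
    # query  : list of symbols; symbols starting with 'x' are variables,
    #          all other symbols are types
    # window : maximal span (last matched index - first matched index + 1)
    # gaps   : list of (lo, hi) bounds on skipped events between
    #          consecutive matched positions
    # trace  : list of types; types : alphabet for variable substitution
    variables = sorted({s for s in query if s.startswith('x')})
    for assignment in product(sorted(types), repeat=len(variables)):
        substitution = dict(zip(variables, assignment))
        word = [substitution.get(s, s) for s in query]
        if embeds(word, trace, window, gaps):
            return True
    return False

def embeds(word, trace, window, gaps, start=0, prev=None, first=None):
    # Try to embed word into trace[start:] as a subsequence respecting
    # the window size and the local gap-size constraints.
    if not word:
        return prev is None or prev - first + 1 <= window
    for i in range(start, len(trace)):
        if trace[i] != word[0]:
            continue
        if prev is not None:
            lo, hi = gaps[0]
            if not (lo <= i - prev - 1 <= hi):
                continue
        remaining_gaps = gaps[1:] if prev is not None else gaps
        if embeds(word[1:], trace, window, remaining_gaps,
                  i + 1, i, i if first is None else first):
            return True
    return False

For example, matches(['x1', 'a', 'x1'], 6, [(0, 2), (0, 2)], list('babcab'), {'a', 'b', 'c'}) returns True via the substitution x1 -> b. The exhaustive search over substitutions reflects the NP-completeness of the matching problem mentioned above; the tractable subclasses discussed in the talk avoid exactly this blow-up.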
Thursday, June 23, 2022, in S2|15-201 at 14:00
Manuel Bodirsky (TU Dresden)
On the intersection of Datalog and Guarded Second-Order Logic
Datalog and guarded second-order logic (GSO) are important formalisms with incomparable expressive power. Bourhis, Krötzsch, and Rudolph introduced *nested guarded queries*, which form a fragment of both Datalog and GSO. I will present an example of a class of finite structures which can be expressed both in GSO and in Datalog, but which cannot be expressed with nested guarded queries. I also show that every class of finite structures that can be expressed in GSO and whose complement is closed under homomorphisms is a finite union of constraint satisfaction problems (CSPs) of countably categorical structures. As a consequence, for every GSO sentence and all l and k there exists a *canonical* Datalog program of width (l,k) in the sense of Feder and Vardi, and the intersection of GSO and Datalog, and of fragments of Datalog, can be characterised via existential pebble games. Pebble game characterisations are useful for proving non-containment results such as the one mentioned above.
Joint work with Simon Knäuer and Sebastian Rudolph.
Friday, June 10, 2022, in S2|15-201 at 13:30
Moritz Lichter (TU Darmstadt)
Separating Rank Logic from Polynomial Time
The quest for a logic capturing polynomial time is the question whether there exists a logic that exactly defines all properties decidable in polynomial time. The most promising candidates for such a logic are Choiceless Polynomial Time (CPT) and rank logic. Rank logic extends fixed-point logic by a rank operator over prime fields.
In this talk, I argue that rank logic cannot define the isomorphism problem of a variant of the so-called CFI graphs. This isomorphism problem is decidable in polynomial time and actually definable in CPT. Thus, rank logic is separated from CPT and in particular from polynomial time.
Friday, May 20, 2022, in S2|15-201 at 13:30
Giovanni Solda (Ghent University)
Multifunctions computing sequentially discontinuous multifunctions
Given two partial multifunctions f and g, we say that f is Weihrauch reducible to g if, roughly speaking, there is a computable procedure H that allows us to find an element of f(x) using g and x as oracles, for every element x in the domain of f. The Weihrauch degree of a multifunction f is thus meant to encode the “uniform strength” of f.
In this talk, we will focus on a variant of the reducibility described above, where we just require the procedure H to be continuous: this gives rise to the continuous Weihrauch degrees. In this setting, we will focus on the degrees of partial multifunctions that are discontinuous but have a rather simple domain. We will see that these degrees are strongly related to ACC_N, a very weak choice problem on the natural numbers. In order to do this, we will briefly discuss how to formulate continuity of multifunctions in terms of Wadge games.
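In symbols, and recalled here only for reference (a standard formulation with separate pre- and post-processing maps rather than the single procedure H above): f is continuously Weihrauch reducible to g iff there are continuous partial maps H, K on Baire space such that, for every G realizing g, the map
\[ p \mapsto K\bigl(p, G(H(p))\bigr) \]
realizes f; the computable Weihrauch reduction requires H and K to be computable.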
This is joint work with Arno Pauly.
Wednesday, 16 March 2022, at 16:15
Patrick Uftring (TU Darmstadt)
Weak and strong versions of effective transfinite recursion
Working in the context of reverse mathematics, we give a fine-grained characterization result on the strength of two possible definitions of effective transfinite recursion used in the literature. Moreover, we show that Pi^0_2-induction along a well-order X is equivalent to the statement that the exponentiation of any well-order to the power of X is well-founded.
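Schematically, and as a paraphrase rather than the paper's exact formulation, the second result reads: over a suitable base theory,
\[ \Pi^0_2\text{-induction along } X \;\Longleftrightarrow\; \forall Y\,\bigl(\mathrm{WO}(Y) \rightarrow \mathrm{WF}(Y^X)\bigr), \]
where $\mathrm{WO}(Y)$ expresses that $Y$ is a well-order and $\mathrm{WF}(Y^X)$ that the exponentiation of $Y$ to the power of $X$ is well-founded.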
Leeds-Ghent Virtual Logic Seminar
http://www1.maths.leeds.ac.uk/~matpsh/virtualseminar.html
Friday, 25 February 2022, at 16:00
Davide Manca (TU Darmstadt)
Pi^1_1-CA_0 proves Fraïssé's conjecture
The research program known as Reverse Mathematics aims to determine the minimal axioms necessary to prove a given result. To do that, one works in second order arithmetic and attempts to show that the result is equivalent to some appropriate subsystem of axioms, over some weaker base theory. Among those subsystems, five play a prominent role: they are commonly called the Big Five. An important open problem in the field is determining the reverse mathematical strength of Fraïssé's conjecture. That statement is called a conjecture for historical reasons, but it is in fact a theorem: it states that countable linear orders form a well-quasi order with respect to embeddability. It is known that Fraïssé's conjecture implies (over a weak base theory) the axioms of the second strongest subsystem among the Big Five, called ATR_0. S. Simpson, A. Marcone, and others conjectured that the reverse implication also holds. A substantial step forward was made in 2017, when A. Montalbán showed that Fraïssé's conjecture can be proved in Pi^1_1-CA_0, the strongest subsystem among the Big Five. In this talk, we present that result.
- – - – - -
Format: Hybrid, i.e., both in person and online
Room: Logic Library (S2|15-201)
Zoom data:
(Meeting-ID: 895 6206 0515, Passcode: 825097)
- – - – - -
We would like to apologize that external participants can attend via Zoom only (this concerns participants who are not students or employees of TU Darmstadt). If the library should become crowded, we might also need to ask some internal participants to attend via Zoom from their office.
Despite these provisos, we would like to encourage attendance in person: Davide is a new PhD candidate in our group, so we think it's a nice opportunity to meet!
If there are questions or comments, please contact the organisers.
Friday, November 19, 2021, in S2|15-244 at 15:00
Prof. Vassilis Gregoriades (National Technical University of Athens)
A characterization of Pi^0_3-completeness.
Given 0 < a < q, the intersection of the sequence spaces l^p, for p > a, is a Pi^0_3 subset of l^q. Nestoridis asked if the latter intersection could lie lower in the Borel hierarchy. We answer this in the negative by proving that this intersection is a Pi^0_3-complete set. The proof of this result motivates a characterization of Pi^0_3-completeness.
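For orientation, the upper bound can be read off from a routine unfolding of membership (a sketch, not taken from the abstract): for $x \in \ell^q$,
\[ x \in \bigcap_{p > a} \ell^p \iff \forall p \in \mathbb{Q}_{>a}\ \exists M \in \mathbb{N}\ \forall N \in \mathbb{N} : \sum_{n \le N} |x_n|^p \le M, \]
a countable intersection of countable unions of closed sets, hence Pi^0_3; the content of the talk is the matching lower bound (completeness).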
Wednesday, 2 December 2020, 18:00
Proof Theory Virtual Seminar:
Ulrich Kohlenbach (TU Darmstadt)
Proof Mining and the “Lion-Man” game
Abstract: see https://www.proofsociety.org/proof-theory-seminar
Zoom link (no password; starts working 15 minutes before the talk)
- – - – -
If you would like to receive reminders about all talks in the future, you
can subscribe to the following google group (no google account required):
https://groups.google.com/forum/#!forum/proof-theory-seminar/join
C A N C E L L E D : March 25, 2020
Prof. Dag Normann, University of Oslo
The Skolem paradox and higher order computability models
The non-paradoxical fact that there is a countable elementary substructure of the continuum is sometimes referred to as “Skolem's paradox”. This non-paradox gives us much freedom in constructing higher-order models where the independence of classical mathematical theorems from second order arithmetic can be shown.
In this talk we will see how methods from higher order computability theory can be applied to construct a plethora of models giving examples of theorems, simple from the point of view of undergraduate students, that cannot be proved in full second order number theory in conjunction with strong recursion principles like those in Gödel's T.
Friday, February 21, 2020, in S2|15-201, at 13:30
Prof. Keita Yokoyama, Japan Advanced Institute of Science and Technology
Kripke models of HA and separations of non-constructive principles
In [1], Akama/Berardi/Hayashi/Kohlenbach gave separations of non-constructive principles such as LPO and LLPO over Heyting arithmetic by using several different methods. In this study, we will try to find a uniform technique with Kripke models to separate those principles. For this, we sharpen Smorynski's construction for the refinement of de Jongh's theorem in [2].
This is joint work with Makoto Fujiwara, Hajime Ishihara, Takako Nemoto and Nobu-Yuki Suzuki.
[1] Yoji Akama, Stefano Berardi, Susumu Hayashi and Ulrich Kohlenbach, An arithmetical hierarchy of the law of excluded middle and related principles. In: Proc. of the 19th Annual IEEE Symposium on Logic in Computer Science (LICS'04), pp. 192-201, IEEE Press (2004).
[2] Craig A. Smorynski, Applications of Kripke models, Metamathematical investigation of intuitionistic arithmetic and analysis, pp. 324-391, Lecture Notes in Mathematics, Vol. 344, Springer, Berlin, 1973.
Friday, February 14, 2020, in S2|15-201, at 13:30
Prof. Kirstin Peters, TU Darmstadt
Session Types for Fault-Tolerant Distributed Algorithms
We strive to use session type technology to prove behavioural properties of fault-tolerant distributed algorithms. Session types are designed to abstractly capture the structure of (even multiparty) communication protocols. The goal of session types is the analysis and verification of the protocols' behavioural properties. One important such property is progress, i.e., the absence of (unintended) deadlock. Distributed algorithms often resemble (compositions of) multiparty communication protocols. In contrast to protocols that are typically studied with session types, they are often designed to cope with faults, like system failures. An essential behavioural property is (successful) fault-tolerant termination, but it is often elaborate to prove for distributed algorithms. We extend multiparty session types to capture faults due to message loss and process crashes. We implement both reliable and unreliable communication and also an unreliable subsession mechanism in types and augment the semantics of processes by failure patterns that can be used to represent system requirements (as, e.g., failure detectors).
Friday, January 31, 2020, in S2|15-201, at 15:30
Prof. Peter Schuster, University of Verona
Resolving Finite Indeterminacy. A Definitive Constructive Universal Prime Ideal Theorem
Joint work with Daniel Wessel
Dynamical proofs were designed to eliminate the ideal objects abstract algebra abounds with. Typically those ideal objects are granted by an incarnation of Zorn's Lemma and serve for proving the semantic conservation of additional non-deterministic sequents, that is, with finite but not necessarily singleton succedents. This was possible also because (finitary) coherent or geometric logic predominates in that area: the use of a non-deterministic axiom can be captured by a finite branching of the proof tree. Incidentally, a paradigmatic case has widely been neglected in dynamical algebra: Krull's Lemma for prime ideals. Digging deeper just about that case, which has been settled only recently, we manage to unearth the underlying general phenomenon: Given a concrete claim of computational nature that usually is proved by the semantic conservation of certain additional non-deterministic axioms, there is a finite labelled tree belonging to a suitable inductively generated class which tree encodes the desired computation. Our characterisation works in the fairly universal setting of a consequence relation enriched with non-deterministic axioms; uniformises most if not all of the known instances of the dynamical method; generalises the proof-theoretic conservation criterion that has been offered before; and last but not least links the syntactical with the semantic approach: every ideal object used for the traditional proof of a concrete claim is the shadow of one of the corresponding tree's branches.
Friday, November 15, 2019, in S2|15-201, at 13:30
Dr. Bruno Dinis, University of Lisbon
Gotta catch 'em all: a parametrised functional interpretation
Since Gödel published his functional (“Dialectica”) interpretation in 1958, various other functional interpretations have been proposed. These include Kreisel's modified realizability, the Diller-Nahm variant of the Dialectica interpretation, Stein's family of interpretations, and more recently, the bounded functional interpretation, the bounded modified realizability, and “Herbrandized” versions of modified realizability and the Dialectica. It is then natural to ask how these different interpretations are related to each other and what is the common structure behind all of them.
These questions were addressed by Paulo Oliva and various co-authors in a “unification programme”. It started with a unification of interpretations of intuitionistic logic, which was followed by various analyses of functional interpretations within the finer setting of linear logic, a proposal on how functional interpretations could actually be combined in the so-called hybrid functional interpretations, and the inclusion of truth variants in the unification. This unification programme has so far been unable to capture the two more recent families of functional interpretations, namely the bounded functional interpretations and the Herbrandized functional interpretations.
In this talk I will present a more general framework for unifying functional interpretations, based on two families of parameters, one for interpreting the contraction axiom and another for interpreting typed quantifications, which allow us to include in the unification the more recent bounded and Herbrandized functional interpretations. I will start by presenting this parametrised interpretation in the setting of affine logic. Then, via the two well-known Girard translations from intuitionistic logic into affine logic, one obtains two parametrised interpretations of intuitionistic logic. I will explain how all of the functional interpretations mentioned above can be recovered by suitable choices of these two parameters and how this framework can be used to discover some new interpretations. I will finish with some open questions.
(This is joint work with Paulo Oliva)
Friday, October 25, 2019, in S2|15-201, at 13:30
Dr. Anton Freund, TU Darmstadt
Reflection and induction in set theory
Equivalences between reflection and induction principles are well-known from the context of first and second order arithmetic. In this talk I present an analogous result in set theory: The principle that any true first order formula with real parameters reflects into some transitive set model is equivalent to induction along $\Delta(\R)$-definable well-founded classes, over a suitable weak base theory (see arXiv:1909.00677 for full details).
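Schematically, and as a paraphrase rather than the precise formulation from the preprint, the reflection principle in question reads
\[ \forall \varphi\ \forall r \in \mathbb{R}\ \bigl( \varphi(r) \rightarrow \exists M\,( M \text{ transitive} \wedge r \in M \wedge M \vDash \varphi(r) ) \bigr), \]
where $\varphi$ ranges over first order formulas of set theory.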
Tuesday, October 8, 2019, in S2|15-201, at 16:15
Prof. Tatsuji Kawai, Japan Advanced Institute of Science and Technology
The domain of a definable function of intuitionistic arithmetic in all finite types
Brouwer (1927) claimed that every function from the Baire space to the set of natural numbers is induced by a neighbourhood function (or associate) whose domain admits so-called bar induction. We show that Brouwer's claim holds for any definable function of intuitionistic arithmetic in all finite types (HA omega). Our proof does not rely on proof theoretic methods such as normalisation or ordinal analysis. Instead, we internalise in HA omega the dialogue-tree interpretation of Gödel's system T due to Martin Escardo (2013). The interpretation determines a syntactic translation on the terms of HA omega, which yields a neighbourhood function from a closed term of HA omega with the required property.
As applications of this result, we revisit some well-known properties of HA omega: uniform continuity of definable functions on the Cantor space; closure under the rule of bar induction; and closure under bar recursion for the lowest type with a definable stopping function.
Tuesday, September 17, 2019, in S2|15-244, at 15:00
Dr. Makoto Fujiwara, Institute of Science and Technology, Meiji University
Weihrauch and constructive reducibility between existence statements
We formalize the primitive recursive variants of Weihrauch reduction between existence statements in finite-type arithmetic and show a meta-theorem stating that primitive recursive Weihrauch reducibility, verifiable in a classical finite-type arithmetic, coincides with formal reducibility in the corresponding (nearly) intuitionistic finite-type arithmetic for all existence statements formalized with existential-free formulas.
In addition, we demonstrate that our meta-theorem is applicable in some concrete examples from classical and constructive reverse mathematics.
Friday, August 23, 2019, in S2|15-201, at 13:30
Anantha Padmanabha, Institute of Mathematical Sciences, Chennai
Term Modal Logic: Satisfiability problem
Multi-modal logics are ubiquitously used in various fields of theoretical computer science. Typically, we have formulas of the form □_i α where i comes from a fixed finite set of agents or indices. In dynamic settings like client-server systems, multi-thread processes, large games, etc., the number of agents cannot be fixed a priori and may even change from state to state. We are thus led to Term Modal Logic (TML), where we use variables ranging over agent names and quantification over such variables; the logic has formulas such as ∀x(P(x) → □_x α). Thus TML is a first-order logic in which we can talk of agent properties as predicates and also use modalities to talk of moves by agents. TML bears some interesting relationship with First-Order Modal Logic. As expected, this logic is highly undecidable, and we will look at some tight undecidability results. Further, we will discuss some interesting decidable fragments: the monodic, the bundled and the two-variable fragments.
This is joint work with Prof. R. Ramanujam.
Friday, May 3, 2019, in S2|15-201, at 13:30
Prof. em. Adrian R. D. Mathias, LIM, Université de la Réunion
Using generic extensions of ill-founded models of weak set theories to build standard models of the same
Tuesday, April 9, 2019, in S2|15-201, at 13:30
Miroslav Chodil, Masaryk University, Brno
The satisfiability problem for two simple fragments of Probabilistic CTL
PCTL is a probabilistic extension of the popular branching-time logic CTL. It has various practical uses, for example in formalizing behavior of stochastic systems. The problem of determining whether a PCTL formula is satisfiable is not known to be decidable, and has been open for decades. In this talk, we will focus on the satisfiability problem for two simple fragments of PCTL. For the first fragment, we will show that finite satisfiability is decidable. For the second fragment, we will discuss why the situation is perhaps not so simple, despite it being a reasonable extension of a fragment with particularly well-behaved models.
Friday, April 5, 2019, in S2|15-201, at 13:30
Prof. Martin Ziegler, KAIST School of Computing, Daejeon
Representation Theory of Compact Metric Spaces and Computational Complexity of Continuous Data
Choosing an encoding over binary strings for input/output to/by a Turing Machine is usually straightforward and/or inessential for discrete data (like graphs), but delicate -- heavily affecting computability and even more computational complexity -- already regarding real numbers, not to mention more advanced (e.g. Sobolev) spaces. For a general theory of computational complexity over continuous data we introduce and justify QUANTITATIVE admissibility as requirement for sensible encodings of arbitrary compact metric spaces, a refinement of qualitative 'admissibility' due to [Kreitz&Weihrauch'85]:
An admissible representation of a T0 space X is a (i) continuous partial surjective mapping from the Cantor space of infinite binary sequences which is (ii) maximal w.r.t. continuous reduction. By the Kreitz-Weihrauch (aka “Main”) Theorem of computability over continuous data, for fixed spaces X,Y equipped with admissible representations, a function f:X→Y is continuous iff it admits a continuous code-translating mapping on Cantor space, a so-called REALIZER. We define a QUANTITATIVELY admissible representation of a compact metric space X to have (i) asymptotically optimal modulus of continuity, namely close to the entropy of X, and (ii) be maximal w.r.t. reduction having optimal modulus of continuity in a similar sense.
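In symbols (the standard qualitative notion, recalled for reference): $F$ on Cantor space realizes $f : X \to Y$ with respect to representations $\delta_X, \delta_Y$ iff
\[ \delta_Y(F(p)) = f(\delta_X(p)) \quad \text{for all } p \in \mathrm{dom}(f \circ \delta_X), \]
and the Kreitz-Weihrauch theorem asserts that, for admissible $\delta_X, \delta_Y$, a function $f$ is continuous iff it has a continuous realizer $F$.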
Careful constructions show the category of such representations to be Cartesian closed, and non-empty: every compact X admits a linearly-admissible representation. Moreover such representations give rise to a tight quantitative correspondence between the modulus of continuity of a function f:X→Y on the one hand and on the other hand that of its realizer: the MAIN THEOREM of computational complexity. This suggests (how) to take into account the entropies of the spaces under consideration when measuring algorithmic cost over continuous data.
This is joint work with Akitoshi Kawamura, Donghyun Lim, and Svetlana Selivanova.
Friday, March 8, 2019, in S2|15-201, at 13:30
Prof. Genaro López Acedo, University of Seville
An application of Proof Mining to Game Theory
In this talk we analyze a discrete pursuit-evasion game, based on Rado's celebrated lion and man problem. Using ideas stemming from proof mining, we prove that in uniformly convex bounded domains the lion always wins, and we extract a uniform rate of convergence for the successive distances between the lion and the man.
Literature:
[1] S. Alexander, R. Bishop and R. Ghrist, 'Total curvature and simple pursuit on domains of curvature bounded above', Geom. Dedicata 149 (2010) 275--290.
[2] U. Kohlenbach, G. López-Acedo and A. Nicolae, A quantitative analysis of the “Lion-Man” game (submitted).
[3] G. López-Acedo, A. Nicolae, B. Piatek, “Lion-Man” and the Fixed Point Property, Geometriae Dedicata. https://doi.org/10.1007/s10711-018-0403-9
[4] G. López-Acedo, A. Nicolae, B. Piatek, “Lion-Man” and geodesic rays (in progress).
[5] J. E. Littlewood, Littlewood's Miscellany (ed: B. Bollobás) (Cambridge University Press, Cambridge 1986).
Friday, February 15, 2019, in S2|15-201, at 13:30
Silke Meißner, TU Darmstadt
Elementary Model-Theoretic Techniques for Inquisitive Modal Logic
Inquisitive modal logic INQML is an extension of modal logic ML, which allows modelling not only statements, but also more complex matters such as questions. We consider INQML from a model-theoretic perspective. A new, efficient standard translation from INQML into first-order logic FO is presented. This standard translation can be used to give a new and purely model-theoretic proof of compactness for INQML.
Tuesday, January 29, 2019, in S2|15-201, at 14:00
Prof. Yong Cheng, Wuhan University, School of Philosopy
Finding the limit of incompleteness for subsystems of PA
Gödel's incompleteness theorem is the most profound and important theorem in the foundations of mathematics. There are still some open problems related to incompleteness. In this talk, I will first give an overview of the current state of research on incompleteness from different perspectives and classify some misinterpretations of Gödel's incompleteness theorem.
In the second part of the talk, I will focus on the open problem of finding the limit of incompleteness for subsystems of PA. Let T be a recursively axiomatizable consistent theory. We say that G1 holds for T iff for any recursively axiomatizable consistent theory S, if T is interpretable in S, then S is undecidable. It is well known that G1 holds for Robinson's Q and its sub-theory R. I examine the following question: whether there exists a theory with the minimal degree of interpretability for which G1 holds. As the first step toward this question, a natural question is: could we find a theory T such that G1 holds for T and the interpretation degree of T is less than the interpretation degree of R (i.e. R interprets T, but T does not interpret R). I will talk about some recent progress on these questions as well as some open questions.
Friday, November 23, 2018, in S2|15-201, at 13:30
Prof. Anuj Dawar, University of Cambridge
Definable Inapproximability: New Challenges for Duplicator
We consider the hardness of approximation of optimization problems from the point of view of definability. For many NP-hard optimization problems it is known that, unless P = NP, no polynomial-time algorithm can give an approximate solution guaranteed to be within a fixed constant factor of the optimum. We show, in several such instances and without any complexity theoretic assumption, that no algorithm that is expressible in fixed-point logic with counting (FPC) can compute an approximate solution. Since important algorithmic techniques for approximation algorithms (such as linear or semidefinite programming) are expressible in FPC, this yields lower bounds on what can be achieved by such methods. The results are established by showing lower bounds on the number of variables required in first-order logic with counting to separate instances with a high optimum from those with a low optimum for fixed-size instances.
This is joint work with Albert Atserias (Barcelona)
https://arxiv.org/abs/1806.11307
Friday, February 9, 2018, in S2|15-201, at 13:30
Dr. Florian Steinberg, INRIA
Type-two poly-time and length revisions
We present a new, and fairly simple, characterization of the type-two poly-time computable functionals. While being machine based, it avoids the use of higher order objects as running times and instead uses an approach first put forward by Cook that sticks to first order functions to formulate time constraints. Cook's class is known to contain more than just feasible operations, and we thus impose further restrictions. The purpose of these restrictions is very simple: it is to avoid unbounded iteration of functional input. We prove that this leads to a subclass of all feasible functionals and that closing under composition leads back to the class of all feasible functionals. We also prove that the same is true for a more restrictive condition, namely for the notion of strong poly-time.
This is joint work with Bruce Kapron.
Friday, February 2, 2018, in S2|15-201, at 13:30
Dr. Adriana Nicolae, University of Seville
“Lion Man” and the Fixed Point Property
In this talk we are concerned with the relation between the fixed point property for continuous mappings and a discrete lion and man game played in a strongly convex domain. Our main result states that in locally compact geodesic spaces or in R-trees, the fixed point property of the domain is equivalent to the success of the lion. At the end we review a few applications of proof mining in the context of geodesic metric spaces.
This talk is based on joint work with Genaro López-Acedo (University of Seville) and Bozena Piatek (Silesian University of Technology).
Friday, January 26, 2018, in S2|15-201, at 13:30
Dr. Andrei Sipos
Contributions to proof mining
I will present the results of my PhD thesis, belonging to the area of proof mining, an applied subfield of mathematical logic. Proof mining (introduced and developed by U. Kohlenbach and his collaborators) aims to obtain quantitative information -- primarily witnesses and bounds for existentially quantified variables -- out of ordinary mathematical proofs of an apparently non-constructive nature. The results under discussion fall naturally within three categories. Firstly, there are some representative applications of such techniques in nonlinear analysis and convex optimization -- the extraction of rates of metastability, asymptotic regularity and even full convergence for some iterations studied in those fields, e.g. the parallel algorithm, the Ishikawa iteration or the proximal point algorithm. Secondly, as this kind of research can easily spill over into related areas, I present some advances in pure analysis, like a characterization of 2-uniformly convex Banach spaces or a general definition that encompasses all the known instances of resolvent operators used in convex optimization. Finally, I present some work in the foundations of proof mining, namely a suitable implicit characterization of the class of L^p spaces that is amenable to a logical treatment, i.e. it allows for a suitable metatheorem that underlies possible applications. This is followed by a derivation of the modulus of uniform convexity for this class of spaces from this axiomatization.
Some of these results are joint work with Laurentiu Leustean and Adriana Nicolae.
Friday, November 3, 2017, in S2|15-201, at 13:30
Anton Freund, University of Leeds,
From Cut Elimination to Pi^1_1-Comprehension
I will begin with an informal explanation of cut elimination and its use in the ordinal analysis of Peano arithmetic. The latter shows that Pi^0_2-soundness is equivalent to the primitive recursive well-foundedness of an ordinal notation system. On a conceptual level, we observe that the focus is on low logical complexity. Next, I explain how an ordinal notation system can be relativized to a given well-order. This brings us to the level of Pi^1_2-statements. Many well-known axioms of that complexity have been characterized in terms of relativized ordinal notation systems: from arithmetical comprehension (Girard, Hirst) up to the existence of omega-models of bar induction (Rathjen and Vizcaíno). Finally, I present a new result at the next type-level: Following a suggestion by Montalbán and Rathjen, I relativize an ordinal notation system to a relativized ordinal notation system. In this way one can get a characterization of Pi^1_1-comprehension, which is a Pi^1_3-statement (preprint available as arXiv:1704.01662).
Friday, October 6, 2017, in S2|15-201, at 13:30
Prof. Dr. Martin Ziegler, KAIST School of Computing
Formal Verification in Imperative Multivalued Programming over Continuous Data Types
(joint work with Gyesik Lee, Norbert Müller, Eike Neumann, Sewon Park, and Norbert Preining)
Inspired and guided by the iRRAM C++ library (Müller 2001), we formally specify a programming language for the paradigm of EXACT REAL COMPUTATION: reliably operating on encapsulated CONTINUOUS data types such as (not necessarily algebraic) real numbers --- imperatively and exactly (no rounding errors) with primitives computable in the sense of Recursive Analysis and a necessarily modified multivalued=non-extensional semantics of tests. We then propose a complete logical system over two types, integers and real numbers, for rigorous correctness proofs of such algorithms. We extend the rules of Hoare Logic to support the formal derivation of such proofs, and have them verified in the Coq Proof Assistant. Our approach is exemplified with three simple numerical example problems: integer rounding, solving systems of linear equations, and continuous root finding.
Wednesday, September 13, 2017, at 14:00 in S2|15-201
Eike Neumann
Continuous enclosures of discontinuous problems
The main theorem of computable analysis asserts that any computable function is continuous. Many interesting problems encountered in practice turn out to be discontinuous, however. It is hence a natural question how much partial information may be obtained on the solutions of a discontinuous problem in a continuous or computable way. To formalise this idea, we propose the notion of continuous enclosures: continuous multi-valued functions which take values in certain semilattices which encode partial information on the solutions of the original problem. We give some existence results and provide some explicit examples.
Friday, July 21, 2017, at 13:30 in S2|15-201
Julian Bitterlich, TU Darmstadt
Special classes of acyclic groups and their applications
I present some connections between results in monoid theory and in (finite) model theory. More specifically, we look at theorems about the profinite topology on free groups (M. Hall, Ribes and Zalesskii) and compare these to theorems concerning extension problems for partial automorphisms (Hrushovski, Herwig and Lascar). The two threads can be related through a suitable notion of acyclicity in groups, which provides yet another viewpoint on the topic.
Friday, July 7, 2017, at 13:30 in S2|15-201
Felix Canavoi, TU Darmstadt
A Modal Characterisation Theorem for Common Knowledge Logic
We investigate epistemic modal logic with common knowledge modalities for groups of agents and obtain two van Benthem style model-theoretic characterisation theorems, a classical version and one in the sense of finite model theory. In order to do that we show that basic modal logic is expressively equivalent to the bisimulation invariant fragment of first-order logic over the non-elementary class of so-called (finite) common knowledge Kripke frames. The transitive nature of common knowledge modalities poses a new challenge that cannot be tackled by the usual locality based techniques. For (finite) common knowledge Kripke frames we construct (finite) bisimilar companions that are based on Cayley graphs of suitable groups with specific acyclicity properties. These Cayley graphs possess a very intricate yet highly regular structure that allows for a locality analysis on multiple scales that are induced by distance measures w.r.t. various coalitions of agents.
Friday, June 30, 2017, at 13:30 in S2|15-201
Prof. Andrey Morozov, Sobolev Institute of Mathematics, Novosibirsk
Infinite time Blum-Shub-Smale machines for computability in analysis
(joint work with Peter Koepke)
We continue the study, started by P. Koepke and B. Seyfferth, of the computational strength of infinite time Blum-Shub-Smale machines (ITBM) performing computations over the reals in ordinal time. We give a characterization of ITBM-computable functions via the iterated Turing jump and discuss the adequacy of this approach to our intuitive ideas of computability used in analysis.
Friday, June 16, 2017, at 13:30 in S2|15-201
Prof. Rodney G. Downey, Victoria University of Wellington
Notes on Computability in Mathematics
We look at some recent advances in the understanding of the computational content of mathematics. I will mainly concentrate on problems in and around abelian groups.
Tuesday, April 11, 2017, at 11:00 in S2|15-201
Michal Konecny, Aston University
Representations for feasibly approximable functions
(joint work with Eike Neumann)
'Exact' continuous real functions can feature as first-class objects in computation in a similar way as so-called exact real numbers. While all natural representations of continuous functions used in computable analysis are computably equivalent, different admissible representations may fail to be equivalent under the polynomial-time reduction defined by Kawamura and Cook (2012). Building on work by Labhalla, Lombardi, and Moutai (2001) and Kawamura, Mueller, Roesnick, and Ziegler (2015), we review several common admissible representations of continuous real functions, study their polynomial-time reducibility lattice and benchmark their performance using AERN, a Haskell library for exact real computation. In particular, we show that rigorous approximation by piecewise polynomials is polynomial-time equivalent to rigorous approximation by rational functions. Each representation of continuous functions offers a trade-off between the ease of computing names of functions (i.e. writing) and the ease of extracting information about a function (i.e. reading). We formalise this concept by switching from representations to compositional evaluation strategies over term signatures. A signature includes a set of function operations, some of which just create functions, some read existing functions (e.g. computing the range) and some do both (e.g. pointwise addition). We reformulate some existing results about function representations as Pareto optimality with respect to certain signatures. We implemented evaluation strategies based on the chosen representations and we benchmark their performance on a small set of terms.

Wintersemester 2016/2017

Thursday, February 16, 2017, at 15:30 in S2|15-201
Sam Sanders, LMU München
The crazy Reverse Mathematics of Nonstandard Analysis and Computability Theory
Reverse Mathematics is a program in the foundations of mathematics founded by Friedman. The goal is to find the minimal axioms required to prove a theorem of ordinary, i.e. non-set theoretic mathematics. Most such theorems fall into only five categories, called the “Big Five”, which are also linearly ordered. We show that the counterparts of the Big Five (and other systems) in Nonstandard Analysis behave very differently. These results are obtained via (and often reformulations of) results in higher-order computability theory. As an example, the nonstandard counterpart of WKL behaves like Kohlenbach's generalisation of WKL, but with nonstandard continuity instead of the epsilon-delta variety. As a second example, nonstandard ACA_0 combined with nonstandard WWKL, resp. with WWKL, falls below, resp. above, ATR_0.

Friday, January 20, 2017, at 13:30 in S2|15-201
Klaus Keimel, TU Darmstadt
Domain Theory and the Cuntz Semigroup of a C*-Algebra
In 2008, G. A. Elliott enriched the semigroup of a C*-algebra, introduced by J. Cuntz as early as 1978, with additional topological information. This topological information is domain-theoretic in nature. Elliott's goal was to provide an invariant for C*-algebras that permits finer distinctions than the invariants used up to that point. Elliott's approach has since been pursued further in numerous papers. Since some of what the C*-algebra people are doing seemed familiar to me, I began to work my way into the subject. I will try to explain what I have understood so far. The underlying ideas are quite simple.

Tuesday, December 20, 2016, at 17:00 in S2|15-201
Keita Yokoyama, Japan Advanced Institute of Science and Technology
On the Proof-Theoretic Strength of Ramsey's Theorem for Pairs
In the field of reverse mathematics, many people have studied the strength of Ramsey's theorem and related combinatorial principles. Especially, determining the strength of Ramsey's theorem for pairs is a long-standing project. Recently, there have been many important results on this topic, both from computability-theoretic and from proof-theoretic viewpoints. In this talk, I will mainly focus on the proof-theoretic strength and report some recent progress.

Friday, November 11, 2016, at 13:30 in S2|15-201
Andrei Sipos, IMAR, University of Bucharest
Proof mining in Lp spaces
We show how to obtain an equivalent, implicit characterization of Lp Banach spaces in a form amenable to a logical treatment (i.e. with bounds on existential quantifiers). That characterization can be immediately transformed into an axiomatization for such spaces in a higher-order logical system of the kind used in proof mining, a research program that aims to uncover the hidden computational content of mathematical proofs using tools from mathematical logic. The axiomatization is followed by a corresponding metatheorem in the style of proof mining. We then illustrate how these results may be applied.

Friday, November 4, 2016, at 13:30 in S2|15-201
Martin Ziegler, KAIST School of Computing
On Computability and Well-Posedness in Shape Optimization
(joint work with Seon Dongseong and Chansu Park, Seoul National University)
Shape Optimization is a contemporary topic in Numerical Engineering. We develop and investigate its algorithmic foundations, building on Recursive Analysis and Computability Theory of Compact Metric Spaces.

Sommersemester 2016

Friday, September 16, 2016, at 13:30 in S2|15-201
Martin Ziegler, KAIST School of Computing
Computational Complexity of (Functions on) Compact Metric Space
(joint work with Akitoshi Kawamura and Florian Steinberg)
We promote the theory of computational complexity on metric spaces: as natural common generalization of (i) the classical discrete setting of integers, binary strings, graphs etc. as well as of (ii) the bit-complexity theory on real numbers and functions according to Friedman, Ko (1982ff), Cook, Braverman et al.; as (iii) resource-bounded refinement of the theories of computability on, and representations of, continuous universes by Pour-El & Richards (1989) and Weihrauch (1993ff); and as (iv) computational perspective on quantitative concepts from classical Analysis: Our main results relate (i.e. upper and lower bound) Kolmogorov's entropy of a compact metric space $X$ polynomially to the uniform relativized complexity of approximating various families of continuous functions on $X$. The upper bounds are attained by carefully crafted oracles and bit-cost analyses of algorithms perusing them. They all employ the same representation (i.e. encoding, as infinite binary sequences, of the elements) of such spaces, which thus may be of own interest. The lower bounds adapt adversary arguments from unit-cost Information-Based Complexity to the bit model. They extend to, and indicate perhaps surprising limitations even of, encodings via binary string functions (rather than sequences) as introduced by Kawamura & Cook (SToC'2010,\S3.4). These insights offer some guidance towards suitable notions of complexity for higher types.

Friday, July 8, 2016, at 14:00 in S2|15-244
Matthias Schröder, TU Darmstadt
Computable Analysis and Topology (Inaugural lecture)
In this talk I will discuss the role of topology in computable analysis. Furthermore, I will discuss the class of topological spaces for which a meaningful computability theory is possible, and explain some interesting examples.

Friday, June 3, 2016, at 13:30 in S2|15-201
Vassilios Gregoriades, University of Turin
Recursive-theoretic characterizations of the Topological Vaught Conjecture (Inaugural lecture)
The celebrated problem of the Topological Vaught Conjecture is receiving increasing attention from the recursion-theoretic community. Recently, Montalban, motivated by results of Spector, obtained a characterization of the latter problem in recursion-theoretic language. In this talk we discuss the problem and Montalban's characterization, we give some examples, and we present another characterization by the speaker.

Friday, April 22, 2016, at 13:30 in S2|15-201
Ivano Ciardelli, University of Amsterdam
Questions in Logic
Traditionally, logic is concerned with sentences of a particular kind, namely, statements. In this talk, I will argue that there is also an interesting role to be played in logic by questions. I will describe how, by suitably generalizing the classical notion of entailment to questions, we can capture not only the standard relation of logical consequence, which concerns specific pieces of information, but also the relation of logical dependency, which concerns information types. I will also discuss how questions may be used in formal proofs as placeholders for generic information of a given type; by manipulating such placeholders, we may construct formal proofs of dependencies.
Interestingly, such proofs admit a kind of proofs-as-programs interpretation: they do not just witness the existence of a certain dependency, but actually encode a method for computing it. If time permits, I will conclude by giving a short overview of the research program on inquisitive logic, mentioning some ongoing research on the first-order and modal case.

Wintersemester 2015/2016

Friday, January 15, 2016, at 13:30 in S2|15-201
Arno Pauly, Clare College, Cambridge
Representations of analytic functions and Weihrauch reductions
The talk considers several representations of the analytic functions on the unit disk and their mutual translations. All translations that are not already computable are shown to be Weihrauch equivalent to closed choice on the natural numbers. Subsequently some similar considerations are carried out for representations of polynomials. In this case in addition to closed choice the Weihrauch degree LPO∗ shows up as the difficulty of finding the degree or the zeros.

Friday, December 18, 2015, at 13:30 in S2|15-201
Achim Blumensath, leaving TU Darmstadt for Masaryk University, Brno
Tree Algebras

Friday, December 4, 2015, at 13:30 in S2|15-201
Ulrik Buchholtz, Carnegie Mellon University
Towards Primitive Recursive Homotopy Type Theory
I shall present some results of ongoing work towards finding a dependent type theory of primitive recursive strength that is suitable for doing synthetic homotopy theory (so it should be a homotopy type theory). The key ingredient towards making this work is a kind of modality used to limit the strength of large eliminations. The motivations behind this work are to achieve a better understanding of univalent universes and to enable (constructive) reverse mathematics of parts of homotopy theory.

Friday, November 27, 2015, at 13:30 in S2|15-201
Kolloquium in remembrance of Thomas Ihringer
On Friday, November 27, there will be a memorial colloquium for Thomas Ihringer, as part of the seminar of the logic group at TU Darmstadt. It will take place in the math building S2|15, room 201 or 244, beginning at 1:30 pm and ending around 5 pm.

13:30-13:45, Short introduction by Klaus Keimel

13:45-14:30, H. Peter Gumm (Philipps-Universität Marburg)
A weak pullback preservation property of weakly monadic set functors
Inspired by recent investigations on congruence modularity, we show that a weakly monadic functor F (the associativity law is not required) weakly preserves pullbacks of constant maps if and only if F(1)=1.

Coffee break

15:00-15:45, Jirí Tuma (Karlova Universita Praha)
Antichains and business intelligence
We will present an example of how useful the elementary theory of ordered sets can be in business intelligence. It is a case study of a collaboration between the algebra group at Charles University in Prague and a start-up company, also based in the Czech Republic.

15:55-16:40 Ferdinand Ihringer (Justus-Liebig-Universität Gießen):
Erdös-Ko-Rado Sets in Finite Geometries
Let $N = \{ 1, \ldots, n \}$. An \emph{Erd\H{o}s-Ko-Rado set} (EKR set) of $N$ is a family $Y$ of $d$-sets of $N$ which pairwise intersect non-trivially. A famous result by Erd\H{o}s, Ko and Rado shows \begin{align*} |Y| \leq \binom{n-1}{d-1} \end{align*} for $n \geq 2d$. There are many natural generalizations of EKR sets to other structures such as permutation groups or combinatorial designs. This talk will put its focus on EKR sets in vector spaces and polar spaces. Let $V$ be an $n$-dimensional vector space over a finite field of order $q$. A \emph{polar space} of rank $d$ over $V$ is defined by a non-degenerate, reflexive sesquilinear form (or some quadratic form) and consists of all subspaces which vanish on this form. The largest of these subspaces are called \emph{generators} and have dimension $d$. An EKR set of a polar space is a set of pairwise non-trivially intersecting generators. Here the largest size of EKR sets in some Hermitian polar spaces is still open. The talk will start with discussing the original EKR sets and their various proofs, which are already accessible to high school students, and will end with current research in vector spaces and finite classical polar spaces.

There will be no registration. However, if you plan to attend, a notice to logik(at)mathematik.tu-darmstadt.de would be welcome.

Friday, November 6, 2015, at 13:30 in S2|15-201
Silvia Steila, University of Torino
Terminating via Ramsey's Theorem
The Termination Theorem by Podelski and Rybalchenko characterizes the termination of transition-based programs as a property of well-founded relations. The original proof of this result is purely classical, as the argument requires Ramsey's Theorem for pairs.
In this talk, we investigate the Termination Theorem in suitable logical settings in order to extract bounds for termination. First we present a fragment of Ramsey's Theorem for pairs which is provable in Second Order Intuitionistic Arithmetic. Then we briefly introduce bounds for the Termination Theorem via Spector's bar recursion. Finally, we focus on termination analysis from the viewpoint of Reverse Mathematics.
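For orientation, the Termination Theorem can be stated as follows (a standard paraphrase, not quoted from the abstract): a program with transition relation $R$ terminates from its reachable states iff there are finitely many well-founded relations $T_1, \ldots, T_n$ such that
\[ R^+ \cap (\mathrm{Reach} \times \mathrm{Reach}) \subseteq T_1 \cup \cdots \cup T_n, \]
where $R^+$ is the transitive closure of $R$ and $\mathrm{Reach}$ is the set of reachable states.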

Tuesday, October 13, 2015 at 14:30 in S2|15-201
Facundo Carreiro, University of Amsterdam
Fragments of Fixpoint Logics: Automata and Expressiveness
In this talk I will give a general overview of some techniques to obtain expressivity results for modal and second-order logics, with a focus on fixpoint logics and automata. Time permitting, I will cover the following topics:
(1) fragments of fixpoint logics: it is known that some logics can be translated to other more expressive logics; for example, propositional dynamic logic (PDL) can be translated to the modal mu-calculus, and first-order transitive-closure logic can be translated to first-order logic with fixpoints. Can we give a semantic and syntactic characterization of the image of these translations? why is this important? what kind of insight do we get and how can we use it?
(2) automata characterizations: Janin and Walukiewicz gave automata models which capture, respectively, the modal mu-calculus and monadic second-order logic (MSO). The general idea behind these automata is very versatile, and we'll see how to adapt it to get automata for other logics such as PDL and weak MSO.
(3) expressiveness modulo bisimilarity: the interest in such expressiveness questions stems from applications where transition systems model computational processes, and bisimilar structures represent the same process. Seen from this perspective, properties of transition structures are relevant only if they are invariant under bisimilarity. Janin and Walukiewicz used automata-theoretic techniques to show that the modal mu-calculus is the bisimulation-invariant fragment of MSO.
We show that, using the approach of (1) and (2), we can get analogous results for PDL, weak MSO and other interesting logics. I will not assume any previous knowledge of automata or fixpoint logics and I will focus on the conceptual aspects and motivations of the techniques.
My related papers can be found here: http://dblp.uni-trier.de/pers/hd/c/Carreiro:Facundo

Sommersemester 2015


Friday, September 11, 2015, at 13:30 in S2|15-201
Makoto Fujiwara, Japan Advanced Institute of Science and Technology
Classical uniform provability and intuitionistic provability for existence statements
In this talk, I would like to present my recent work including some ongoing attempts.
In the investigation of existence statements in computable or effective mathematics, the existence or non-existence of uniform constructions that produce a witness from an arbitrary given instance of the problem is often a central issue.
Along the lines of [1], we formalize this uniform provability in terms of finite-type arithmetic and try to characterize it by (formalized) intuitionistic provability.
The primary tools for this work are proof interpretations developed in [2].
[1] M. Fujiwara, Intuitionistic provability versus uniform provability in RCA, Lecture Notes in Computer Science, vol. 9136, pp. 186–195, 2015.
[2] U. Kohlenbach, Applied proof theory: Proof Interpretations and their Use in Mathematics, Springer Monographs in Mathematics, Springer-Verlag, 2008.

Friday, July 17, 2015, at 14:00 in S2|15-201
Daniel Körnlein
Quantitative Results for the Variational Inequality Problem
We provide a quantitative treatment for the Variational Inequality Problem over the fixed point set of a nonexpansive mapping on Hilbert space. In particular, we give a rate of metastability (in the sense of Tao) both for a resolvent-type implicit scheme and for the Hybrid Steepest Descent Method. The results are extracted from a theorem due to I. Yamada [2] and were obtained using proof mining techniques from mathematical logic [1].
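Recall (a standard definition, not specific to [1] or [2]) that $\Phi$ is a rate of metastability for a sequence $(x_n)$ in a metric space $(X,d)$ if
\[ \forall \varepsilon > 0\ \forall g : \mathbb{N} \to \mathbb{N}\ \exists N \le \Phi(\varepsilon, g)\ \forall i, j \in [N, N + g(N)] :\ d(x_i, x_j) \le \varepsilon. \]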
[1] U. Kohlenbach: Applied Proof Theory: Proof Interpretations and their Use in Mathematics. Springer Monographs in Mathematics, 2008.
[2] I. Yamada: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In D. Butnariu, Y. Censor and S. Reich, editors, Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, volume 8 of Studies in Computational Mathematics, pages 473-504. Elsevier, 2001.

Friday, July 17, 2015, at 13:30 in S2|15-201
Angeliki Koutsoukou-Argyraki
Approximate common fixed points and rates of asymptotic regularity for one-parameter nonexpansive semigroups
In this talk we present a recent work [1] that involves an application of proof mining to fixed point theory and to nonlinear semigroups. This is joint work with Ulrich Kohlenbach.
We extract quantitative information on the approximate common fixed points of a one-parameter strongly continuous semigroup of nonexpansive mappings (nonexpansive semigroup) T(t), t ≥ 0, on a subset C of a Banach space E, under the assumption that for each x ∈ C the mapping t → T(t)x from [0,∞) into C is uniformly continuous on each compact interval [0,K] for all K ∈ N and, moreover, given b ∈ N, it has a common modulus of uniform continuity for all x ∈ C with ||x|| ≤ b. This is achieved by a logical analysis of the proof of a theorem by Suzuki in [2]. We then apply our result to extract rates of asymptotic regularity for the nonexpansive semigroup T(t), t ≥ 0, on a convex C ⊆ E with respect to the Krasnoselskii iteration.
[1] Kohlenbach, U. and Koutsoukou-Argyraki, A.: Approximate common fixed points and rates of asymptotic regularity for one-parameter nonexpansive semigroups, submitted preprint.
[2] Suzuki, T.: Common fixed points of one-parameter nonexpansive semigroups, Bull. London Math. Soc. 38, 1009-1018 (2006).

Thursday, July 9, 2015, at 17:00 in S2|15-201
Prof. Hans-Peter A. Künzi, University of Cape Town
A construction of the B-completion of a T0-quasi-metric space
We discuss a construction of the so-called B-completion of a T0-quasi-metric space (X, d) that relies on the injective hull of (X, d). The B-completion of a T0-quasi-metric space (X, d) is a T0-quasi-metric extension of (X, d) originally investigated by H.-P. A. Künzi and C. Makitu Kivuvu.
In general, the B-completion of a T0-quasi-metric space is larger than the better known bicompletion of (X, d). In the special case of a balanced T0-quasi-metric space the B-completion coincides with a completion that was first studied by D. Doitchinov.

Friday, June 19, 2015, at 13:30 in S2|15-201
Prof. Dr. Peter Cholak, Notre Dame University
On Friedberg Splits
We will focus on splittings of computably enumerable (c.e.) sets. Let $\sqcup$ denote disjoint union. A split $A_0 \sqcup A_1 = A$ is trivial if one of the $A_i$ is computable. Every c.e. set has a number of trivial splits. It is known that every non-computable c.e. set can be uniformly split into two non-computable c.e. sets via a Friedberg split. It is also known that many, but not all, c.e. sets have a nontrivial non-Friedberg split. We show that it is impossible to uniformly split all c.e. sets in such a way that, whenever a c.e. set has a non-Friedberg split, the procedure provides such a split.

Tuesday, June 16, 2015, at 15:30 in S2|15-201
Erkal Selman, RWTH Aachen
Graphs Identified by Two-Variable Logic with Counting
We classify graphs and, more generally, finite relational structures that are identified by C^2, the two-variable first-order logic with counting. Using this classification, we show that it can be decided in almost linear time whether a structure is identified by C^2. There is a natural and well-known complete invariant for the logic C^2. We provide simple constructions to show that this invariant admits linear time inversion.
(Joint work with Sandra Kiefer and Pascal Schweitzer)

Friday, June 12, 2015, at 13:30 in S2|15-201
Prof. Russell G. Miller, Queens College, City University of New York
Functors and Effective Interpretations in Model Theory
We draw connections between two related notions. An \emph{effective interpretation} of a countable structure A in another countable structure B uses exactly the notion of interpretation from model theory, except that now the domain of the interpretation is allowed to use tuples from B of arbitrary finite length, and that the formulas to be used must be computable infinitary Sigma_1 formulas, rather than finitary formulas of arbitrary complexity. A \emph{computable functor}, from the category Iso(B) of all structures with domain omega isomorphic to B to the corresponding category Iso(A), is given by two Turing functionals, one mapping objects from Iso(B) to objects in Iso(A) and the other mapping isomorphisms between objects in Iso(B) to isomorphisms between the corresponding objects in Iso(A). Recent work by Harrison-Trainor, Melnikov, Montalbán and the speaker has shown these two concepts to be very tightly related. An effective interpretation of A in B clearly yields a computable functor from Iso(B) to Iso(A). We will describe the converse, without excessive detail, and consider how these notions may be extended to continuous functors and interpretations by L_{\omega_1\omega} formulas in general.

Wintersemester 2014/15

February 13, 2015, at 13:30 in S2|15-201
Peter Selinger, Dalhousie University
Number-theoretic methods in Quantum Circuit Synthesis
Recall that SO(3) is the group of orthogonal transformations of 3-space, or equivalently, the group of rotations of the sphere. Consider the subgroup C of SO(3) that is generated by 45-degree rotations about the x-, y-, and z-axes (“moves”). It is easy to see that C is dense in SO(3). This means that for every U in SO(3) and every epsilon > 0, there exists some sequence W of moves such that |W - U| < epsilon, say in the operator norm. Consider the problem of actually computing such a sequence of moves, given U and epsilon. This is an important problem in quantum computation, called the approximate quantum circuit synthesis problem. Typically we are interested in algorithms that are efficient (say, whose runtime is polynomial in log(1/epsilon)) and which yield a reasonably short sequence of moves. The naive solution (which arises from the straightforward proof of density) requires O(1/epsilon) moves, which is exponentially longer than necessary. Until recently, the standard solution to the problem was the so-called Solovay-Kitaev algorithm, which is based on geometric ideas, namely commutators in Lie groups. The Solovay-Kitaev algorithm requires O(log^c(1/epsilon)) moves, where c is approximately 3.97. On the other hand, there is an information-theoretic lower bound showing that at least 3 log(1/epsilon) + K moves are required, where K is some constant. It was an open problem for 17 years whether the exponent c in the complexity of the Solovay-Kitaev algorithm can be reduced to 1. Julien Ross and I recently solved this problem in the affirmative. Using methods from undergraduate algebraic number theory, we found an efficient approximation algorithm requiring only O(log(1/epsilon)) moves. In addition to removing the exponent c, we also removed the multiplicative constant: our algorithm requires OPT2 + O(log(log(1/epsilon))) moves, where OPT2 is the length of the second-to-optimal solution. Moreover, if a quantum computer is available, our algorithm computes the actual optimal solution to the approximate synthesis problem. Similar algorithms have since been found for certain other finitely generated subgroups of SO(3) and U(2) as well.

February 6, 2015, at 15:00 in S2|15-201
Victor Selivanov, IIS, Novosibirsk
Towards the effective descriptive set theory
We prove effective versions of some classical results about measurable functions (in particular, any computable Polish space is an effectively open effectively continuous image of the Baire space, and any two perfect computable Polish spaces are effectively Borel isomorphic). We derive from this extensions of the Kleene-Suslin theorem and of the effective Hausdorff theorem for computable Polish spaces (this was recently established by Becher and Grigorieff with a different proof) and for computable omega-continuous domains (this answers an open question from the paper by Becher and Grigorieff).

January 30, 2015, at 13:30 in S2|15-201
Hugo Férée, Loria, Nancy
Game semantics approach to higher order complexity
Although notions of complexity have been defined and well studied for first- and second-order functions, this is not the case for functions of third order or higher. In fact, there is no unique notion of computability for such functions, in particular because there are several classes of functions (partial, total, hereditarily total, sequential, etc.) to consider. We propose a definition of complexity based on game semantics, which in particular applies to innocent strategies, hence extending this notion to PCF. We also propose a class of polynomial-time higher-order functions which satisfies several expected properties and seems to be a good candidate to capture the notion of feasibility at higher types.

January 16, 2015, at 13:30 in S2|15-201
Hugo Férée, Loria, Nancy
Understanding the complexity of real norms
Computability and complexity have been defined over many kinds of objects, such as real numbers and real functions. For these notions, there are several results that help understand what they imply: computability is related to continuity, and complexity is linked to analytical properties in many cases. The notion of complexity has recently been extended to real operators (C[0,1] -> R), and few results exist to understand this new notion. We will focus on the case of real norms and identify the analytical properties of such objects which are related to their (deterministic and nondeterministic) complexity.

November 14, 2014, at 13:30 in S2|15-201
Svetlana Aleksandrova, Ershov Institute of Informatics Systems, Novosibirsk
On Sigma-definability of hereditarily finite superstructures
This talk presents basic notions and methods of Sigma-definability in admissible sets and hereditarily finite superstructures. The main result states that any relation Sigma-definable in a hereditarily finite superstructure HF(M) can also be defined by a Sigma-formula in a hereditarily finite list superstructure HW(M), and vice versa.

November 7, 2014, at 13:30 in S2|15-201
Vladislav Amstislavskiy, Ershov Institute of Informatics Systems, Novosibirsk
Decidability Problem for Theories of Continuous and Continuously Differentiable Functions
The talk mostly concerns the decidability problem for the elementary theory of continuous functions over the reals. It is proved that the theory of C(R) (continuous functions from R to R) with pointwise order is decidable. Then it is demonstrated that the theories of continuous functions and of open sets are m-equivalent for quite a wide class of spaces. Finally, we consider the theory of continuously differentiable functions and demonstrate its decidability.

October 31, 2014, at 13:30 in S2|15-201
Johannes Schramm, TU Darmstadt
Logic Formalization and Automated Deductive Analysis of Business Rules
Automated formal verification of certain properties of business rule management systems (BRMS) is demanded by companies using such systems in productive environments. We implement this process for certain termination properties of the BRMS Drools. Syntax and structural operational semantics for fragments of the Drools Rule Language (DRL) are defined and used to prove a termination criterion for DRL.

October 24, 2014, at 13:30 in S2|15-201
Benjamin Seyfferth, TU Darmstadt
Models of Ordinal Computation
With this talk I would like to present a brief survey of three different models of ordinal computability: Ordinal Turing machines are employed to study sets of reals, Blum-Shub-Smale machines are equipped with the capability of forming continuous limits and Lambda-calculus is extended to handle transfinitely many conversions. A possible topic for a subsequent discussion could be the strong emphasis on the computational paradigm that distinguishes ordinal computability from classical work on generalized recursion theory.

Thursday, October 23, 2014, at 11:40 in S2|15-201
Adrian R. D. Mathias, Université de la Réunion, at present University of Freiburg
Rudimentary Recursion, Provident Sets and Forcing

Sommersemester 2014

Monday, September 29, 2014, at 14:30 in S2|15-234
Margarita Korovina, Institute of Informatics Systems, Novosibirsk
Index sets for continuous data
We will talk about our recent research on the theory of index sets in the context of computable analysis, considering classes of effectively open sets and computable real-valued functions. First, we construct principal computable numberings for effectively open sets and computable real-valued functions. Then, we calculate the complexity of index sets for important problems such as root realisability, set equivalence and inclusion, and function equivalence, which naturally arise in continuous constraint solving. Using the developed techniques we give a direct proof of the generalised Rice-Shapiro theorem for effectively open sets of Euclidean spaces and present an analogue of Rice's theorem for computable real-valued functions. We illustrate how index sets can be used to estimate the complexity of continuous data problems in the setting of the Kleene-Mostowski arithmetical hierarchy.

Friday, July 18, 2014, at 13:30 in S2|15-201
Matthew de Brecht, CiNet, NICT, Osaka
Compact saturated subsets of quasi-Polish spaces
Hyperspaces of compact sets have many interesting applications, ranging from object-classification algorithms in computer vision based on the Hausdorff metric, to the modeling and analysis of non-deterministic programs in domain theory. However, the analysis of these hyperspaces has been mostly divided into research on either complete metric spaces or on continuous domains.
Here we will aim for a common framework by initiating research on the hyperspaces of compact subsets of quasi-Polish spaces. As a first step in this direction, we will show that the hyperspace of saturated compact subsets of a quasi-Polish space is also quasi-Polish when equipped with the upper Vietoris topology. Furthermore, we will show that the Scott-open filters (which can be identified with the compact saturated sets via the Hofmann-Mislove Theorem) form a basis for the Scott-topology on the open set lattice of a quasi-Polish space.
(This talk contains results from joint work with M. Schröder and V. Selivanov).

Friday, July 11, 2014, at 13:30 in S2|15-201
Takayuki KIHARA, Japan Advanced Institute for Science and Technology, (JAIST)
Scott ideals in infinite dimensional topology
A topological space X is piecewise homeomorphic to Y if there are countable covers (X_i), (Y_i) of X and Y such that each piece X_i is homeomorphic to Y_i. Every (trans-)finite dimensional uncountable Polish space is piecewise homeomorphic to Cantor space, and most known proper-infinite dimensional Polish spaces are piecewise homeomorphic to Hilbert cube. Motto Ros et al. asked whether there is another piecewise homeomorphism type of an uncountable Polish space. We solve this problem by using computability theory with the help of the notion of Scott ideal (omega-model of RCA_0+WKL). We will also see that our space is piecewise homeomorphic to R. Pol's counterexample to P. Alexandrov's problem (a weakly infinite dimensional compact metrizable space which is not decomposable into countably many finite dimensional subspaces). By extending our idea, we also show that there are uncountably many piecewise homeomorphism types of compact metrizable spaces.

Thursday, July 10, 2014, at 11:00 in S2|15-201
Prof. Dr. Walter Roth, University Brunei-Darussalam
Locally convex cones: an introduction
This is an introduction to the basics of the theory of locally convex cones, a generalization of the established and well-developed theory of locally convex topological vector spaces. The motivation comes from concepts that are algebraically close to the structure of vector spaces, but in which negatives of elements do not always exist and multiplication is therefore defined only for positive scalars. Examples include classes of functions that may take the value infinity, or sets of convex subsets of a vector space. If the cancellation law fails in such structures, they cannot be embedded into vector spaces, and results and techniques for vector spaces are not directly applicable. The theory of locally convex cones captures such examples and allows the development of a duality theory similar to that of locally convex vector spaces.

Friday, June 27, 2014, at 13:30 in S2|15-234
Professor Dana S. Scott, UC Berkeley
Stochastic Lambda-Calculus
It is shown how the operators in the “graph model” for lambda-calculus (which can function as a programming language for Recursive Function Theory) can be expanded to allow for “random combinators”. The result then is a model for a new language for random algorithms.

Friday, June 13, 2014, at 13:30 in S2|15-201
Yasuyuki Tsukamoto, Kyoto University, Japan
Imaginary Hypercubes
Imaginary cubes are three-dimensional objects that have square projections in three orthogonal ways just as a cube has. We study their higher-dimensional extensions. In the n-dimensional case, we say that an object A is an imaginary n-cube of an n-cube C if A has (n-1)-cube projections in n orthogonal ways just as C has. It is known that, if an object A is an imaginary 3-cube of two different 3-cubes, then A is a subset of the intersection of the two cubes which share one of their diagonals. We show that this proposition is true also in higher-dimensional cases, and that this situation cannot happen in five or more dimensions. We also talk about fractal imaginary n-cubes induced by Latin (n-1)-cubes.

Friday, May 16, 2014, at 13:30 in S2|15-201
Felix Canavoi, RWTH Aachen
Defining Winning Strategies
We study the definability of winning strategies for infinite games in least fixed-point logic. In order to do that we specify what is meant by defining winning strategies in a logic. We show that the problems of defining winning regions and defining winning strategies are not always equivalent, although they are closely connected. We show that winning strategies are definable for reachability games, safety games, and parity games with a bounded number of priorities. Furthermore, we explore the case of parity games with an unbounded number of priorities. For this class, winning strategies are not LFP-definable in general, but on finite game graphs winning strategies are LFP-definable if, and only if, they are computable in polynomial time.

WS 2013/14

Friday, March 28, 2014, in S2|15-201 at 13:30
Arno Pauly, Clare College Cambridge, UK
On function spaces and polynomial-time computability
In the context of second-order polynomial-time computability, we prove that there is no general function space construction. We proceed to identify restrictions on the domain or the codomain that do provide a function space with polynomial-time function evaluation containing all polynomial-time computable functions of that type. As side results we show that a polynomial-time counterpart to admissibility of a representation is not a suitable criterion for natural representations, and that the Weihrauch degrees embed into the polynomial-time Weihrauch degrees.

Friday, February 21, 2014, in S2|15-201 at 13:30
Alexey Ostrovskiy, Universität der Bundeswehr München
An Alternative Approach to the Decomposition of Functions
Let a function f: X -> Y with compact fibers take any open set into a union of an open and a closed set. Then there exist subsets T_n of X (n = 1, 2, …) such that every restriction f|T_n is an open or closed function onto some subset Y_n, and the sets Y_n (n = 0, 1, …) cover Y.
This result is related to the problem of preservation of Borel classes by open-Borel functions and to Luzin's question about the decomposition of Borel-measurable functions into countably many continuous ones.

Friday, February 14, 2014, in S2|15-201 at 13:30
Vassilios Gregoriades, TU Darmstadt
A recursive theoretic view to the decomposability conjecture
The decomposability conjecture states that every function from an analytic space to a separable metric space, for which the preimage of a Σ^0_{m+1} set is a Σ^0_{n+1} set (1 <= m <= n), is decomposable into countably many Σ^0_{n-m+1}-measurable functions on Π^0_n domains. In this talk we present recent results about this problem based on work of Kihara and the speaker. The key tools for the proofs come from recursion theory and effective descriptive set theory, and include a lemma by Kihara on canceling out Turing jumps and Louveau separation.
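In symbols (our restatement of the statement above, for orientation only): if $f\colon X \to Y$ satisfies $f^{-1}(\Sigma^0_{m+1}) \subseteq \Sigma^0_{n+1}$ with $1 \le m \le n$, then there should exist a cover
$X = \bigcup_{i\in\mathbb{N}} X_i, \qquad X_i \in \Pi^0_n, \qquad f\restriction X_i \ \text{is } \Sigma^0_{n-m+1}\text{-measurable for every } i.$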

Friday, February 7, 2014, in S2|15-201 at 13:30
Prof. Anuj Dawar, University of Cambridge, UK
Symmetric Circuits and Fixed-Point Logics
We study queries on graphs (and other relational structures) defined by families of Boolean circuits that are invariant under permutations of the vertices. In particular, we study circuits that are symmetric, that is, circuits whose invariance is explicitly witnessed by automorphisms of the circuit induced by the permutation of their inputs. We show a close connection between queries defined on structures by uniform families of symmetric circuits and definability in fixed-point logics. Joint work with Matthew Anderson.

Friday, January 24, 2014, in S2|15-201 at 13:30
Prof. Keita Yokoyama, JAIST, Japan
On the second-order categoricity of the natural number system
It is important in the foundations of mathematics that the natural number system is characterizable as a system of 0 and a successor function by second-order logic. In other words, the following second-order categoricity theorem of Dedekind holds: every Peano system (P,e,F) is isomorphic to the natural number system (N,0,S). In this talk, I will investigate Dedekind's theorem and other similar statements. We will first do reverse mathematics over RCA_0, and then weaken the base theory. This is joint work with Stephen G. Simpson and Leszek Kolodziejczyk.

Friday, December 20, 2013, in S2|15-201 at 13:30
Yasuyuki Tsukamoto, Kyoto University
Minimal Oriented Matroids with Disconnected Realization Space

Tuesday, December 17, 2013, in S2|15-201 at 13:00
Yasuyuki Tsukamoto, Kyoto University
A Hausdorff space with a strongly independent dyadic subbase
Let X be a second countable Hausdorff space. Two points of X can be separated by a pair of open subsets of X which are exteriors of each other. A dyadic subbase of X is a subbase composed of countably many pairs of such open subsets. A dyadic subbase induces a domain representation of the topological space X. I will talk about the properties of this representation and give a strange example for it.

Friday, December 13, 2013, in S2|15-201 at 13:30
Prof. Dr. Yichuan Yang, Beihang University, Beijing, China, at present Stuttgart University
Non-commutative logical algebras and algebraic quantales
Quantum B-algebras are partially ordered implication algebras which are equivalent to subreducts of quantales. They were introduced axiomatically in [Rump-Yang: Ann. Pure Appl. Logic 165 (2014)]. They provide a unified semantics for non-commutative algebraic logic, and they cover the vast majority of implication algebras. Partially defined products on algebras related to effect algebras are handled efficiently in this way. The results of Galatos-Jonsson-Tsinakis on the splitting of generalized BL-algebras into a semidirect product of a po-group operating on an integral residuated poset are extended to a characterization of twisted semidirect products of a po-group by a quantum B-algebra.

Friday, December 6, 2013, in S2|15-201 at 13:30
Stéphane Le Roux, TU Darmstadt
Lazy beats crazy: a spurious yet mathematical justification for laissez-faire
In perfect-information games in extensive form, common knowledge of rationality triggers backward induction, which yields a Nash equilibrium. That result assumes much about the players' knowledge, and it holds only for a subclass of the games in extensive form. Alternatively, this article defines a non-deterministic evolutionary process, by myopic and lazy improvements, that settles exactly at Nash equilibria (in extensive form). Importantly, the strategic changes that the process allows depend only on the structure of the game tree; they are independent of the actual preferences. Nonetheless, the process terminates if the players have acyclic preferences; and even if some preferences are cyclic, the players with acyclic preferences stop improving eventually! This result is then generalised to games played on DAGs or infinite trees, and it is also refined by assigning probabilities to the process and perturbing it.

Friday, November 22, 2013, in S2|15-201 at 13:30
Anna-Sophie Heinemann, University of Paderborn
Boole and Jevons on Logical Method
Today's notion of formal logic was coined in the 19th century. In one sense, this concerns the form of logic, that is, its shape and operational techniques. But it also relates to the conception of logical form: the scope of meanings imputable to propositional forms. The same understanding of logical form allows for different methods which may arrive at the same results. However, the techniques differ in some important respects, the most obvious of which seems to lie in the appeal to the notions of functions and variables. In this talk, two approaches will be illustrated by the example of George Boole's (quasi-)algebraic calculus of logic and what William Stanley Jevons terms a method of 'indirect deduction', to be accomplished by mechanical devices such as his 'logical piano.'

Friday, October 11, 2013, in S2|15-201 at 13:30
Dr. Rupert Hölzl, Universität der Bundeswehr München
Absolutely undecidable sets
An infinite binary sequence A is absolutely undecidable if it is impossible to compute A on a set of positions of positive upper density. Absolute undecidability is a weakening of bi-immunity. Downey, Jockusch and Schupp asked whether, unlike the case for bi-immunity, there is an absolutely undecidable set in every non-zero Turing degree. We provide a positive answer to this question by applying techniques from coding theory. We show how to use Walsh-Hadamard codes to build a truth-table functional which maps any sequence A to a sequence B, such that given any restriction of B to a set of positive upper density, one can recover A. This implies that if A is non-computable, then B is absolutely undecidable. Using a forcing construction, we show that this result cannot be strengthened in any significant fashion.
(Joint work with Laurent Bienvenu and Adam R. Day)
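For orientation (standard notions, not specific to this talk): the upper density of a set $S \subseteq \mathbb{N}$ is
$\overline{d}(S) = \limsup_{n\to\infty} \frac{|S \cap \{0,\dots,n-1\}|}{n},$
and, in one standard formalisation, $A$ is absolutely undecidable if every partial computable function that agrees with $A$ on its domain has a domain of upper density $0$.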

Friday, October 4, 2013, in S2|15-201 at 13:30
Prof. Paul Potgieter, University of South Africa
The theory of complex oscillations
Complex oscillations, also known as algorithmically random Brownian motion, were first formulated by Asarin and Pokrovskii. The work of Fouche established a bijection between the sets of complex oscillations and Kolmogorov-Chaitin random strings. In this talk, we shall discuss the foundations and basic tools of the theory (such as effectively generated algebras) and survey some of the subsequent work, much of which concerns itself with the fractal properties of complex oscillations.

SS 2013

Friday, September 6, 2013, at 13:30 in S2|15-201
Johanna Sokoli, TU Darmstadt
Satisfiability of cross product terms is complete for real nondeterministic polytime Blum-Shub-Smale machines
Nondeterministic polynomial-time Blum-Shub-Smale machines over the reals give rise to a natural discrete complexity class between NP and PSPACE. Several problems have been shown complete (under many-one reduction by polynomial-time Turing machines) for this class -- although mostly problems from real algebraic geometry / polynomial systems (e.g. by Pascal Koiran); notable exceptions being “stretchability of pseudolines” (Peter Shor), “realizability of chirotopes” (Nikolai Mnev/Juergen Richter-Gebert), and “satisfiability in quantum logic”. We exhibit a new one based on questions about expressions built from cross products only.
http://mcu2013.ini.uzh.ch/Program.html

Monday, August 19, 2013, at 14:00 in S2|15-201
Ruy de Queiroz, Federal University of Pernambuco, Brazil
Propositional equality, identity types, and direct computational paths
In proof theory the notion of canonical proof is rather basic, and it is usually taken for granted that a canonical proof of a sentence must be unique up to certain minor syntactical details (such as, e.g., change of bound variables). When setting up a proof theory for equality one is faced with a rather unexpected situation where there may not be a unique canonical proof of an equality statement. Indeed, in a (1994–95) proposal for the formalisation of proofs of propositional equality in the Curry–Howard style, we had already uncovered such a peculiarity. Totally independently, and in a different setting, Hofmann & Streicher (1994) have shown how to build a model of Martin-Löf's Type Theory in which uniqueness of canonical proofs of identity types does not hold. The intention here is to show that, by considering proofs of equality as sequences of rewrites and substitutions, it becomes rather natural that two (or more) distinct proofs may all be canonical, with none to be preferred over another. By looking at proofs of equality as rewriting (or computational) paths, this approach is in line with the recently proposed connections between type theory and homotopy theory via identity types, since elements of identity types will be, concretely, paths (or homotopies).
(Joint work with Anjolina G. de Oliveira)
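As standard background for the identity types mentioned above (recalled here, not part of the talk's contribution), the formation and introduction rules of Martin-Löf's identity type are
$\dfrac{\Gamma \vdash a : A \qquad \Gamma \vdash b : A}{\Gamma \vdash \mathrm{Id}_A(a,b)\ \mathrm{type}} \qquad \dfrac{\Gamma \vdash a : A}{\Gamma \vdash \mathrm{refl}_a : \mathrm{Id}_A(a,a)},$
and the uniqueness of canonical proofs discussed above corresponds to the principle that any two elements of $\mathrm{Id}_A(a,b)$ are themselves propositionally equal, which fails in the Hofmann-Streicher model.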

Friday, July 5, 2013, at 13:30 in S2|15-201
Aziz Erkal Selman, RWTH Aachen
On Fractional Isomorphisms

An isomorphism from a graph G to a graph H can be seen as a permutation matrix X satisfying the equation AX = XB, where A and B are the adjacency matrices of G and H, respectively. This means that the Graph Isomorphism Problem can be stated in terms of 0-1 integer linear programming. The solutions to the relaxation of this integer program are called fractional isomorphisms. In 1994, Ramana, Scheinerman and Ullman proved that there is a fractional isomorphism between two graphs if and only if the graphs cannot be distinguished by naive vertex classification, which is a widely used, intuitive method for isomorphism testing. I am going to present a new elementary proof of this theorem. There are other characterisations of the same concept, including a logical and a game characterisation. I will briefly talk about them and give a new combinatorial characterisation: two graphs are fractionally isomorphic if and only if there is a so-called migration from one to the other.
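As an illustration of the linear-programming view described above, here is a minimal, hedged sketch (not code from the talk) that tests fractional isomorphism by checking feasibility of AX = XB over doubly stochastic matrices X; it assumes numpy and scipy are available, and the function name is ours.

import numpy as np
from scipy.optimize import linprog

def fractionally_isomorphic(A, B):
    # True iff some doubly stochastic X satisfies AX = XB (an LP feasibility check).
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    n = A.shape[0]
    if B.shape != (n, n):
        return False
    rows, rhs = [], []
    # One equality constraint per entry (i, j) of AX - XB = 0; X is flattened row-major.
    for i in range(n):
        for j in range(n):
            row = np.zeros(n * n)
            for k in range(n):
                row[k * n + j] += A[i, k]   # coefficient of X[k, j] in (AX)[i, j]
                row[i * n + k] -= B[k, j]   # coefficient of X[i, k] in (XB)[i, j]
            rows.append(row)
            rhs.append(0.0)
    # Doubly stochastic: every row sum and every column sum of X equals 1.
    for i in range(n):
        r = np.zeros(n * n); r[i * n:(i + 1) * n] = 1.0
        rows.append(r); rhs.append(1.0)
    for j in range(n):
        c = np.zeros(n * n); c[j::n] = 1.0
        rows.append(c); rhs.append(1.0)
    # Zero objective: we only care about feasibility.
    res = linprog(np.zeros(n * n), A_eq=np.array(rows), b_eq=np.array(rhs),
                  bounds=[(0.0, 1.0)] * (n * n), method="highs")
    return res.status == 0

# Example: the 6-cycle and two disjoint triangles are fractionally isomorphic
# (both are 2-regular on 6 vertices) although they are not isomorphic.

By the theorem recalled in the abstract, this LP is feasible exactly when naive vertex classification cannot distinguish the two graphs.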

Thursday, July 4, 2013, at 15:00 in S2|15-201
Florian Steinberg
Computational Complexity of Poisson's Equation
The last years have seen an increasing interest in classifying (existence claims in) classical mathematical theorems according to their strength -- e.g. proof-theoretically or in terms of Weihrauch-degrees of computability.
We pursue the question under the refined view of asymptotic computational worst-case complexity. Ko&Friedman have for instance shown that the indefinite integral of a polynomial-time computable function is again polynomial-time computable iff FP=#P, a complexity class (believed to lie strictly) between NP and PSPACE. We prove that the same holds for the solution of Poisson's Equation on the Euclidean unit ball.
(Joint work with Akitoshi Kawamura and Martin Ziegler)
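For concreteness (our restatement of the standard Dirichlet formulation; the talk's precise setting may add regularity assumptions): Poisson's equation on the Euclidean unit ball $B^d = \{x \in \mathbb{R}^d : |x| < 1\}$ asks, given $f$, for the solution $u$ with
$\Delta u = f \ \text{ in } B^d, \qquad u = 0 \ \text{ on } \partial B^d.$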

Friday, June 28, 2013, at 13:30 in S2|15-201
Eike Neumann
The Weihrauch-Degree of the Browder-Göhde-Kirk Theorem in Separable Hilbert Space
Let K be a closed, convex and bounded subset of a separable Hilbert Space. The Browder-Göhde-Kirk Theorem asserts any nonexpansive f:K->K to have a fixed point. [Kohlenbach'01] studied the computational content of the Krasnoselski and Ishikawa fixed point theorems for nonexpansive mappings in uniformly convex normed spaces. He observed that the iterations employed there fail to yield uniform (i.e. depending only on f) rates of convergence already in the case where K = [0;1]. This led him to extract instead rates of so-called asymptotic regularity. On the other hand, since the set of fixed points is known to be convex, it is easy to see that any computable nonexpansive f:[0;1]^d->[0;1]^d has a computable fixed point; cmp. [LeRoux&Ziegler'08]. We show that the Browder-Göhde-Kirk Theorem is computationally equivalent to the principle of Weak Convex Choice in terms of so-called Weihrauch reductions, introduced by [Brattka&Gherardi'09]: A (possibly multivalued, i.e. non-extensional) mapping g:X=>Y between represented spaces X and Y is said to REDUCE to a mapping h:U=>V if there exist computable (multivalued) functions K:VxX=>Y and L:X=>U such that K(h(L(x)),x) = g(x) for all x in X. Convex Choice is the restriction of the so-called Closed Choice principle to convex sets. Closed Choice (in a separable metric space) in turn is a uniform version of the problem of finding some point in a non-empty co-r.e. closed set: Given negative information in form of an enumeration of open rational balls avoiding at least one point, produce such a point. Note that on Cantor Space this is equivalent to Weak König's Lemma and hence uncomputable. Weak Convex Choice is a variant of Convex Choice with respect to the weak topology.

Friday, June 21, 2013, at 13:30 in S2|15-201
Angeliki Koutsoukou-Argyraki, University of Copenhagen
Automorphisms of the Calkin algebra under two different set-theoretic hypotheses; an independence result
A question by Brown, Douglas and Fillmore from the late 1970's asks if the Calkin algebra has outer automorphisms. It has recently been shown that the answer to this question is undecidable in ZFC: Phillips and Weaver (2007) showed that, assuming the Continuum Hypothesis, the Calkin algebra has outer automorphisms, while Farah (2011), assuming Todorcevic's Axiom, showed that it is also consistent with ZFC that all automorphisms of the Calkin algebra are inner, and moreover provided an alternative proof of Phillips' and Weaver's result.
In this talk I will sketch the proofs in Farah's paper.

Friday, June 14, 2013, at 14:00 in S2|15-201
Florian Steinberg
Computable analytic functions in iRRAM
It is well known that knowing only the coefficient sequence is insufficient to computably evaluate the induced power series, even for a fixed radius of convergence. This can be mended by enriching a name of a real sequence with suitable discrete advice. More precisely, arXiv:1211.4974 has exhibited integer quantities serving both as enrichment and as gauge for uniform parameterized polynomial-time algorithms for evaluating, adding, multiplying, composing, differentiating, integrating, and maximizing real analytic functions. iRRAM is a C++ library for arbitrary precision real arithmetic and analysis, which allows one to compute limits of sequences of real numbers and functions. The above combination of continuous and discrete information canonically yields a declaration of a C++ class interface for real analytic functions. We describe an implementation of such a class and of the standard arithmetic operations thereon, as well as differentiation, integration and evaluation operators -- and several optimizations.

Friday, June 14, 2013, at 13:30 in S2|15-201
Carsten Rösnick
Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime
The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation to reliable numerics. Here the seemingly simple operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable univariate functions to instances which are 'hard' for classical complexity classes NP, #P, and CH; but, restricted to analytic functions, map polynomial-time computable ones to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on some given f are sufficient to obtain similar data on Max(f) and integral(f), and within what running time, in terms of these parameters and the guaranteed output precision 1/2^n. It turns out that Gevrey's classical hierarchy of functions climbing from analytic to smooth corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
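The Gevrey hierarchy referred to above can be recalled as follows (standard definition; the exact parameterization used in the talk may differ by constants): $f \in C^\infty([0,1])$ belongs to the Gevrey class of level $s \ge 1$ if there are $C, R > 0$ with
$|f^{(n)}(x)| \le C \, R^n \, (n!)^s \quad \text{for all } n \in \mathbb{N},\ x \in [0,1];$
level $s = 1$ gives exactly the analytic functions, and larger $s$ moves towards general smooth functions.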

Friday, June 7, 2013, at 13:30 in S2|15-201
Andreas Döring, University of Oxford, UK
A new kind of logic for quantum systems
Standard quantum logic in the Birkhoff-von Neumann style is plagued by a variety of conceptual and interpretational problems. In this talk, I will show that one can associate a completely different and much better-behaved form of logic to each quantum system. This is based on the so-called topos approach to quantum theory that provides a new notion of generalised state space for quantum systems. Subsets (or rather, subobjects) of this quantum state space form a complete Heyting algebra, and in certain cases a complete bi-Heyting algebra. Such subobjects represent propositions about the quantum system. There is a 'translation map', called daseinisation, from Birkhoff-von Neumann logic to the new form of topos-based logic. I will present some key properties of the topos-based form of logic for quantum systems, including the Heyting and co-Heyting negation, and will consider the role of quantum states as models of this logic.

Friday, May 31, 2013, at 13:30 in S2|15-201
Carsten Rösnick
Closed Sets and Operators thereon: Representations, Computability and Complexity
In the past, many representations (in the sense of Weihrauch's Type-2 Theory of Effectivity, TTE) for subsets S of Euclidean space have been devised and systematically compared to each other regarding their computable equivalence. Fewer authors, however, examined how representations relate complexity-wise. By first giving a unified (re-)formulation of four existing representations (one that permits encoding further properties of a set, like a bound on its diameter), we then systematically compare which pairs of set representations admit polynomial-time translations, extending the aforementioned existing (so far non-uniform) complexity results and proving new ones. We then use these insights to prove how the complexity (and even computability) of operators over sets (like intersection and projection) varies depending on the representation of their respective in- and outputs.

WS 2012/13

Friday, March 15, 2013, at 13:30 in S2|15-201
Dr. Jaime Gaspar*
Krivine's classical realisability and the unprovability of the axiom of choice and the continuum hypothesis

Proof interpretations are tools in mathematical logic with a wide range of applications: consistency results, independence results, and extraction of computational content from proofs, just to name a few. They are usually applied only in the context of logic or arithmetic. There is the dream of applying them even to set theory. Now the dream is fulfilled with a novel proof interpretation: Krivine's classical realisability. In this talk I present my personal digest of Krivine's classical realisability and one of its main applications to set theory: set theory ZF plus the statement “there is a set S between N (the set of natural numbers) and R (the set of real numbers) not equinumerous to its cartesian square S x S” is a consistent theory where both the axiom of choice and the continuum hypothesis fail. As a corollary we get two classical and celebrated results: ZF does not prove the axiom of choice nor the continuum hypothesis. This is a short, simple, and sweet talk, focusing on the ideas and setting aside technicalities.

* INRIA Paris-Rocquencourt, pi.r2, Univ Paris Diderot, Sorbonne Paris Cite, F-78153 Le Chesnay, France. Financially supported by the French Fondation Sciences Mathematiques de Paris.

Friday, February 1, 2013, at 13:30 in S2|15-201
Prof. Dr. Martin Ziegler
Real Benefit of Promises and Advice
Promises are a standard way to formalize partial algorithms; and advice quantifies nonuniformity. For decision problems, the latter is captured in common complexity classes such as P/poly, that is, with advice growing in size with that of the input. We advertise constant-size advice and explore its theoretical impact on the complexity of classification problems -- a natural generalization of promise problems -- and on real functions and operators. Specifically we exhibit problems that, without any advice, are decidable/computable but of high complexity while, with (each increase in the permitted size of) advice, (gradually) drop down to polynomial-time.
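To fix notation (a standard definition, not specific to this talk): $L \in \mathrm{P/poly}$ iff there are a polynomial-time machine $M$, a polynomial $p$, and advice strings $(a_n)_{n \in \mathbb{N}}$ with $|a_n| \le p(n)$ such that
$x \in L \iff M(x, a_{|x|}) \text{ accepts};$
constant-size advice as advocated above replaces the bound $|a_n| \le p(n)$ by $|a_n| \le c$ for a fixed constant $c$.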

Friday, January 25, 2013, at 13:30 in S2|15-201
Johanna Sokoli
On the Complexity of Satisfiability over Vector Product Terms
A real vector product term is a term constructed iteratively by forming vector products of R^3-valued variables X_1, …, X_n. By defining a vector product for elements of the projective space P^2 in an appropriate way one can also define projective vector product terms. Given a set of vector product terms, the question arises whether a single equation, or even an entire equation system formed by these terms, is satisfiable. In this talk, different definitions of such satisfiability problems are given and an overview of several polynomial reductions is provided. Furthermore, the languages are classified into complexity classes and a proof of completeness for some of these languages will be given.

Tuesday, January 15, 2013, at 16:00 in S2|15-201
Keita Yokoyama, Tokyo Institute of Technology
Combinatorial principles and Reverse Mathematics
In the Reverse Mathematics program, many mathematical theorems are classified into typical subsystems of second-order arithmetic called the “big five”. Recently, combinatorial principles have received much attention in this program, since many of them are not classified into the big five. In this talk, I will discuss the role of Ramsey's theorem in Reverse Mathematics and related topics.

Friday, December 14, 2012, at 15:00 in S2|15-201
Dr Kord Eickmeyer, National Institute of Informatics, Tokyo
Order-Invariant Logics on Restricted Classes of Structures
Order-invariant extensions of various classical logics pop up naturally in descriptive complexity and database theory. Formulas of these logics may speak about a linear order on structures, but have to be invariant under the particular choice of such an order. Strictly speaking, this does not even define a logic, because in general it is undecidable whether a given formula has this invariance property or not, and this reflects in the failure of many powerful tools from finite model theory, such as Ehrenfeucht-Fraïssé games.
In my talk I will review known results about the expressiveness and model-checking complexity of order- and successor-invariant first-order logics. I will then go on to talk about my ongoing research (with Michael Elberfeld) about the expressive power of order-invariant logics on restricted classes of structures.

Tuesday, December 4, 2012, at 16:15 in S2|15-201
Dr. Tomer Kotek, Haifa, Israel
Applications of logic in graph theory: definability of graph invariants
Logical methods can be used to unify results for individual graph-theoretic objects, by providing meta-theorems which apply to all such objects definable in an appropriate logic. In this talk we discuss graph properties and graph polynomials definable in First Order Logic and Monadic Second Order Logic in terms of their definability and combinatorial meta-theorems. In particular, we show a method of proving non-definability using connection matrices. We extend the results for graph properties by showing a Feferman-Vaught type theorem for First Order Logic extended with counting quantifiers.

Wednesday, November 28, 2012, at 13:30 in S2|15-201
Rasmus Møgelberg, ITU Copenhagen, Denmark
Presheaf models for guarded recursion

A guarded recursive definition is one where the recursion variable only occurs in positions guarded by a delay operator. In this talk I will present a new model of guarded recursion and motivate it using two different examples: first as a model for computations with coinductive types such as streams, and second as a model of a metalanguage useful for defining models of programming languages with higher-order store. The model is a simple presheaf model (over the ordered natural numbers) and I will describe the computational intuition. If I have time I will describe work in progress on constructing models of intensional type theory with guarded recursion. Most of the talk describes joint work with Lars Birkedal, Kristian Stoevring and Jan Schwinghammer published at LICS 2011.

November 23, 2012, at 13:30 in S2|15-201
Makoto Fujiwara, Tohoku University, Japan
Marriage Theorem for Countable Graphs and Computability
We investigate the so-called marriage theorem with respect to computability, making use of techniques from reverse mathematics. The marriage theorem has the form that a marriage problem (a bipartite graph) satisfying appropriate conditions has a solution (an injection). A computable marriage problem may not have a computable solution, but under strengthened conditions it does have one. Based on this fact, we investigate how the computational difficulty of marriage problems varies as extra information is added step by step. In the second part of this talk, we consider a new combinatorial condition which also makes a computable marriage problem have a computable solution. However, unlike the previous case, there is no uniform procedure to construct a solution from a given computable marriage problem with this condition. On the other hand, if we have a little more information about the marriage problem, we can construct a solution uniformly.
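For orientation (classical background, not the talk's new conditions): a solution of a marriage problem given by a bipartite graph with parts $B$ and $G$ is an injection $M\colon B \to G$ with $(b, M(b))$ an edge for every $b \in B$, and Hall's classical marriage condition requires
$|N(F)| \ge |F| \quad \text{for every finite } F \subseteq B,$
where $N(F)$ denotes the set of neighbours of $F$; the computability-theoretic results above concern strengthenings of such conditions.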

November 16, 2012, at 13:00 in S2|15-201
PD Dr. Olaf Beyersdorff, University of Leeds, UK
How difficult is it to verify proofs?
Traditionally proof complexity adopts the view that proofs are verified in polynomial time and a rich body of results is known for the complexity of proofs in this framework. The talk will start by providing an introduction to main notions and motivations in proof complexity in this classical setting. The second part of the talk is devoted to a recent line of research trying to understand the power of proof systems that use alternative computational resources to verify proofs. In the talk I will mention recent results where proofs are verified by either additional computational resources (such as advice, randomness, quantum computations) or by very weak computational models. In particular, we will look at a very weak model where proofs are verified by NC^0 circuits and show non-trivial lower and upper bounds for this restrictive model.

November 9, 2012, at 13:00 in S2|15-201
PD Dr. Laurentiu Leustean, Institute of Mathematics of the Romanian Academy, Bucharest, Romania
Proof Mining in Nonlinear Analysis
The program of proof mining is concerned with the extraction of hidden finitary and combinatorial content from proofs that make use of highly infinitary principles. This new information can be both of quantitative nature, such as algorithms and effective bounds, as well as of qualitative nature, such as uniformities in the bounds or weakenings of the premises. Thus, even if one is not particularly interested in the numerical details of the bounds themselves, in many cases such explicit bounds immediately show the independence of the quantity in question from certain input data. In this talk we present recent applications of proof mining in nonlinear analysis: effective results on the asymptotic behavior of Picard iterations of firmly nonexpansive mappings, a class of functions which play an important role in metric fixed point theory and optimization due to their correspondence with maximal monotone operators (joint work with David Ariza and Genaro Lopez-Acedo); and explicit and highly uniform rates of metastability (in the sense of Tao) for a nonlinear generalization of the classical von Neumann mean ergodic theorem, due to Shioji and Takahashi, for Banach spaces with a uniformly Gateaux differentiable norm (joint work with Ulrich Kohlenbach).

November 2, 2012, at 13:00 in S2|15-201
Prof. Dr. Martin Otto
Groupoids and Hypergraphs
We look at a novel construction of finite hypergraphs of controlled acyclicity, based on finite groupoids with suitable acyclicity properties: their Cayley graphs have large girth even w.r.t. a discounted distance measure that contracts arbitrarily long sequences of edges from the same colour class (sub-groupoid), and only counts transitions between colour classes (cosets). These groupoids are shown to be useful, e.g., in the construction of generic finite hypergraph covers that avoid small cyclic configurations. The hypergraph constructions in question use reduced products with groupoids that are generated by the elementary local extension steps and can be made to preserve the symmetries of the given overlap pattern.

October 26, 2012, at 13:00 in S2|15-201
Reiner Hähnle
Abstract Symbolic Execution

Modern software tends to undergo frequent requirement changes and typically is deployed in many different scenarios. This poses significant challenges to formal software verification, because it is not feasible to verify a software product from scratch after each change. It is essential to perform verification in a modular fashion instead. The goal must be to reuse not merely software artifacts, but also specification and verification effort. In our setting code reuse is realized by delta-oriented programming, an approach where a core program is gradually transformed by code “deltas”, each of which corresponds to a product feature. The delta-oriented paradigm is then extended to contract-based formal specifications and to verification proofs. As a next step towards modular verification we transpose Liskov's behavioural subtyping principle to the delta world. Finally, based on the resulting theory, we perform a syntactic analysis of contract deltas that permits us to automatically factor out those parts of a verification proof that stay valid after applying a code delta. This is achieved by a novel verification paradigm called “abstract symbolic execution.”

October 26, 2012, at 10:30 in S2|15-201
Viktor Winschel
Coalgebraic Analysis of Subgame-perfect Equilibria in Infinite Games without Discounting – Towards Reflexive Economics

We present a novel coalgebraic formulation of infinite extensive games. We define both the game trees and the strategy profiles by possibly infinite systems of corecursive equations. Certain strategy profiles are proved to be subgame perfect equilibria using a novel proof principle of predicate coinduction, which is shown to be sound by reducing it to Kozen's metric coinduction. We characterize all subgame perfect equilibria for the dollar auction game. The economically interesting feature is that in order to prove these results we do not need to rely on continuity assumptions on the payoffs, which amount to discounting the future. In particular, we prove a form of one-deviation principle without any such assumptions. This suggests that coalgebra supports a more adequate treatment of infinite-horizon models in game theory and economics. I will also sketch the possibilities that this (and more generalized) mathematics opens up for reflexive economics, which takes seriously that modelling in social science takes place within the modelled system and essentially depends on self-referential paradoxes, corecursion, and similar mathematical constructions. This opens the door to a proper treatment of the Lucas critique; evolutionary, institutional, agent-based, computational, and numerical economics; econometrics of non-experimental data; money, value theory, political economics and management theory; and also to a unification and cross-fertilization between systems sociology, computer science and biology.

Thursday, October 11, 2012, at 13:00 in S2|15-201
Stéphane Le Roux
From Determinacy to Nash equilibrium
Game theory is a modelling tool for economics and Nash equilibrium is its central concept, involving many players and many possible outcomes. Independently, logic has been using a game-theoretic terminology and determinacy is its central concept, involving two players and two possible outcomes. I will show that all determinacy results can be uniformly transferred into a result of existence of multi-outcome (possibly multi-player) Nash equilibrium, thus establishing a two-way bridge between logic and economics.

2012, January to September


September 21, 2012, at 13:00 in S2|15-201
Arno Pauly, Cambridge University
Synthetic Descriptive Set Theory

September 7, 2012, at 13:00 in S2|15-201
Pavol Safarik, TU Darmstadt
A functional interpretation for nonstandard arithmetic
We introduce constructive and classical systems for nonstandard arithmetic and show how variants of the functional interpretations due to Gödel and Shoenfield can be used to rewrite proofs performed in these systems into standard ones. These functional interpretations show in particular that our nonstandard systems are conservative extensions of E-HA^ω and E-PA^ω, strengthening earlier results by Moerdijk and Palmgren, and Avigad and Helzner. We will also indicate how our rewriting algorithm can be used for term extraction purposes. To conclude, we will point out some open problems and directions for future research, including some initial results on saturation principles.

August 24, 2012, at 13:00 in S2|15-201
Tahereh Jafarikhah, University of Tarbiat Modares, Tehran, Iran; at present University of Hagen, Germany
Computable Riesz representation on the dual of C[0; 1] revisited
By the Riesz representation theorem, for every linear functional F: C[0,1] → R there is a function g: [0,1] → R of bounded variation such that
F(h) = ∫ h dg   (h ∈ C[0,1]).
A computable version is proved in [LW07a]: a function g can be computed from F and its norm, and F can be computed from g and an upper bound on its total variation. In this article we present a much more transparent proof. We first give a new proof of the classical theorem, from which we then can derive the computable version easily. As in [LW07a] we use the framework of TTE, the representation approach for computable analysis, which allows us to define natural concepts of computability for the operators under consideration.

August 24, 2012, at 10:00 in S2|15-201
Hiroyuki OTA, University of Tokyo, Japan
Computational Complexity of Smooth Differential Equations
The computational complexity of the solution $h$ to the ordinary differential equation $h(0) = 0$, $h'(t) = g(t, h(t))$ under various assumptions on the function $g$ has been investigated in hope of understanding the intrinsic hardness of solving the equation numerically. Kawamura showed in 2010 that the solution $h$ can be PSPACE-hard even if $g$ is assumed to be Lipschitz continuous and polynomial-time computable. We place further requirements on the smoothness of $g$ and obtain the following results: the solution $h$ can still be PSPACE-hard if $g$ is assumed to be of class $C^1$;

Tuesday, July 17, 2012, at 11:00 in S2|15-401
Prof. Hans-Peter Künzi, University of Cape Town, South Africa
The Katetov construction revisited
(joint work with M. Sanchis, Castellon, Spain)
Various concepts from the theory of metric spaces are generalized to an asymmetric setting. In particular we study the Katetov construction modified for a T_0-quasi-metric space. Our approach is related to the recent work of the first author with Kemajou and Otafudu about a concept of hyperconvexity in such spaces.

July 6, 2012, at 13:00 in S2|15-201
Prof. Benjamin Miller, University of Münster
An anti-basis theorem for definable cardinals
We will discuss how local rigidity properties of the usual action of SL_2(Z) on R^2 can be used to rule out basis theorems near the base of the definable cardinal hierarchy. I hope to make the talk accessible to a broad audience. This is joint work with Clinton Conley.

June 29, 2012, at 13:00 in S2|15-201
Prof. Martin Otto, TU Darmstadt
Pebble Games and Linear Equations (joint work with Martin Grohe)
We give a new, simplified and detailed account of the correspondence between levels of the Sherali-Adams relaxation of graph isomorphism and levels of pebble-game equivalence with counting (higher-dimensional Weisfeiler-Lehman colour refinement). The correspondence between basic colour refinement and fractional isomorphism (Ramana, Scheinerman, Ullman 1994) is re-interpreted as the base level of Sherali-Adams and generalised to higher levels in (Atserias, Maneva 2011), who prove that the two resulting hierarchies interleave. In carrying this analysis further, we give (a) a precise characterisation of the level k Sherali-Adams relaxation in terms of a modified counting pebble game; (b) a variant of the Sherali-Adams levels that precisely match the k-pebble counting game; (c) a proof that the interleaving between these two hierarchies is strict. We also investigate the variation based on boolean arithmetic instead of real/rational arithmetic and obtain analogous correspondences and separations for plain k-pebble equivalence (without counting). Our results are driven by considerably simplified accounts of the underlying combinatorics and linear algebra.

June 22, 2012, at 13:00 in S2|15-201
Atefeh Keshavarzi Zafarghandi, Amirkabir University, Tehran, Iran
Dynamical Systems via Domain Theory
We consider dynamical systems from a computational point of view. We then study the associated dynamical systems on extended hyperspaces and show that attractors of iterated function systems can be obtained as fixed points in these new hyperspaces.

June 15, 2012, at 13:00 in S2|15-201
Achim Blumensath, TU Darmstadt
Recognisability and Algebras of Infinite Trees

June 1, 2012 at 13:00 in S2|15-201
Prof. Andre Nies, University of Auckland, New Zealand
Algorithmic randomness and differentiability
We show that algorithmic randomness notions for a real correspond to differentiability at the real of all functions in an appropriate effective class.

May 18, 2012 at 13:00 in S2|15-201
Andrey Morozov, Sobolev Institute of Mathematics, Novosibirsk
On Sigma-definability of structures over the reals
We give a brief survey of the results on structures Sigma-definable over HF(R) and present some results on the number of non-Sigma-isomorphic presentations of the ordered field R of reals and of its ordering. In particular, we will discuss the question, whether it is possible to use R to 'improve' itself by creating its new classically isomorphic Sigma-copy.

May 4, 2012
Prof. Takakazu Mori, Sangyo University, Kyoto, Japan
Fine Computable Functions – a notion of computability for discontinuous functions –

The notion of computability for functions due to Pour-El and Richards requires Sequential Computability and Effective Continuity. So, a computable function must be continuous. The theory of Walsh series is based on the Fine metric, which is stronger than the Euclidean metric. A function is Fine continuous if and only if it is continuous at dyadic irrationals and right continuous at dyadic rationals. So, Rademacher functions and Walsh functions are Fine continuous. Fine computability is defined by means of the Fine topology instead of the Euclidean topology.

April 27, 2012 at 13:30 in S2|15-201
Katharina Schade
Proof mining for Halpern iterations in CAT(0) spaces
We examine a convergence result of Halpern iterations in CAT(0) spaces due to Saejung (2010) and explain how to apply proof mining methods to obtain effective uniform rates of metastability as outlined by Kohlenbach and Leustean in 2011. The novelty of this approach lies in the fact that Saejung's original proof used the existence of Banach limits. This requires a substantial reference to the axiom of choice which is not permitted in current metatheorems. We show how one can reduce this reference to fit the requirements of a metatheorem given in Gerhardy-Kohlenbach (2005).

April 20, 2012 at 13:30 in S2|15-201
Samuele Maschio, University of Pavia, Italy
Initial algebras and internal syntax
In category theory, recursive objects are defined using initial algebras. Just as for natural numbers, objects of lists and objects of trees, it is possible to give a notion of *internal theory* in a category using *ad hoc* functors. I will give some examples and underline the connection between Goedel coding and the internalization of arithmetic in the syntactic category of Peano arithmetic.

March 23, 2012 at 13:30 in S2|15-201
Costas Poulios, University of Athens, Greece

The fixed point property on tree-like Banach spaces
A Banach space $X$ is said to have the fixed point property if for every weakly compact convex set $K\subseteq X$ and every non-expansive $T:K\to K$ (i.e. $\|Tx-Ty\|\leq \|x-y\|$ for all $x,y\in K$), the map $T$ has a fixed point. This property is closely related to the geometry of the space under consideration. If the geometry of the space is “nice”, i.e. if $X$ possesses normal structure, then $X$ has the fixed point property. On the other hand, a space without normal structure may fail this property (e.g. the space $L_1$) or it may enjoy this property (e.g. the space $c_0$).
In this lecture we will consider the important class of separable Banach spaces not containing $\ell_1$ with non-separable dual. Our purpose is to study these spaces with respect to the fixed point property. In particular, after discussing some known members of this class, we will present a Banach space belonging to the class under consideration which has the fixed point property even though it does not possess normal structure.

March 2, 2012 at 11:00 in S2|15-201
Alexander Kreuzer

Non-principal ultrafilters, program extraction and higher order reverse mathematics
We investigate the strength of the existence of a non-principal ultrafilter over fragments of higher order arithmetic. Let (U) be the statement that a non-principal ultrafilter on N exists and let ACA_0^w be the higher order extension of ACA_0. We show that ACA_0^w + (U) is Pi1_2-conservative over ACA_0^w and thus that ACA_0^w + (U) is conservative over PA. Moreover, we provide a program extraction method and show that from a proof of a strictly \Pi1_2 statement \forall f \exists g A(f,g) in ACA_0^w + (U) a realizing term in Gödel's system T can be extracted. This means that one can extract a term t\in T, such that \forall f A(f,t(f)).

February 10, 2012 at 14:30 in S2|15-201
Benno van den Berg

About the Herbrand Topos

January 27, 2012 at 13:30 in S2|15-201
Vassilis Gregoriades
The complexity of the set of points of continuity of a multi-valued function
Abstract: We consider a notion of continuity concerning multi-valued functions between metric spaces. We focus on multi-valued functions assigning points to nonempty closed sets and we answer the question asked by M. Ziegler (Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability with Applications to Effective Linear Algebra, to appear in Ann. Pure Appl. Logic) of determining the complexity of the set of points of continuity of such a multi-valued function. We will see that, under appropriate compactness assumptions, the complexity of this set is low in the hierarchy of Borel sets. On the other hand in the general case it might not even be a Borel set. In particular, this notion of continuity is not metrizable in the following sense: if we view a multi-valued function F from X to Y as a single-valued function from X to the family F(Y) of closed subsets of Y, it is not necessarily the case that there is a metrizable topology on F(Y) such that F is continuous at an arbitrary x in the usual sense (of single-valued functions) exactly when it is continuous at x in the sense of multi-valued functions. We will also present some similar results about a stronger notion of continuity, which is closely related to the Fell topology and thus in some cases turns out to be metrizable.

Wednesday, January 18, 2012 at 14:25 in S2|15-201
Jean-Yves Beziau, Federal University of Rio de Janeiro, UFRJ
Visiting Professor University of Munich, LMU
Universal logic: A general completeness theorem
The aim of universal logic is to develop general concepts and proofs for a better understanding of logical systems. After giving a brief overview of universal logic I will present a general completeness theorem linking sequent systems with bivaluations.

January 13, 2012 at 13:30 in S2|15-201
Sam Sanders, Ghent University
Reuniting the antipodes: bringing together Nonstandard Analysis and
Constructive Analysis
Constructive Analysis was introduced by Errett Bishop to identify the `computational meaning' of mathematics. In the spirit of intuitionistic mathematics, notions like `algorithm,' `explicit computation,' and `finite procedure' are central. The exact meaning of these vague terms was left open, to ensure the compatibility of Constructive Analysis with several traditions in mathematics. Constructive Reverse Mathematics (CRM) is a spin-off of Harvey Friedman's famous Reverse Mathematics program, based on Constructive Analysis. Bishop famously derided Nonstandard Analysis for its lack of computational meaning. In this talk, we introduce `Ω-invariance': a simple and elegant definition of `finite procedure' in (classical) Nonstandard Analysis. Using an intuitive interpretation, we obtain many results from CRM, thus showing that Ω-invariance is quite close to Bishop's notion of `finite procedure.'

Tuesday, January 12, 2012 at 11:00 in S2|15-201
Yoshihiro Maruyama, Oxford University
Chu Duality, Born Coalgebras, and Quantum Symmetries
After a brief overview of recent developments of quantum foundations in Oxford, we focus on Abramsky's representations of quantum systems as Chu spaces and as coalgebras, with the purpose of analysing them from the viewpoint of duality theory. We first develop a general theory of dualities between point-free and point-set spaces, based on the formalism of Chu space. The Chu duality theory includes, as a special case, a duality between quantum state spaces and property observables. Building upon the duality, we refine Abramsky's coalgebraic representation of quantum symmetries, so that our (non-fibred) category of coalgebras turns out to be strictly smaller than Abramsky's fibred one; at the same time, however, it is still big enough to represent quantum symmetries.

Logic Seminar 2011


December 9, 2011 at 13:30 in S2|15-201
Jaime Gaspar
The infinite pigeonhole principle

Abstract: The infinite pigeonhole principle is one of the world's most evident statements: if we colour the natural numbers with finitely many colours, then some colour occurs infinitely often. Despite its obviousness, its treatment can be surprisingly challenging. We are going to illustrate this with two case studies: 1. the finitary infinite pigeonhole principles in Tao's programme of finitisations in analysis; 2. the infinite pigeonhole principle in a quantitative version of Hillam's theorem characterising the convergence of fixed point iterations.
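
In symbols, the principle in question is the following statement (a standard formulation, added only for reference): for every $k$ and every colouring $c : \mathbb{N} \to \{0, \ldots, k\}$ there is a colour $i \leq k$ such that $\forall n \, \exists m \geq n \; c(m) = i$.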
* Financially supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant SFRH/BD/36358/2007 co-financed by Programa Operacional Potencial Humano / Quadro de Referência Estratégico Nacional / Fundo Social Europeu (União Europeia).

Tuesday, December 6, 2011 at 13:30 in S2|15-201
Paulo Oliva, Queen Mary University of London
On the restricted form of Spector's bar recursion.

November 11, 2011 at 13:30 in S2|15-201
Feng Shiguang, Sun Yat-Sen University, Guangzhou

This talk is about the expressive power and complexity of second-order extended Horn logic and Krom logic.

Descriptive complexity is a bridge between complexity theory and logic. It asks for the logical resources that are necessary to define a property, rather than the computational resources, such as time, space or the amount of hardware, that are necessary to decide it. Horn and Krom logic have their origins in propositional logic. They have been extended to second-order logic, yielding the logics SO-HORN and SO-KROM. We define revised and extended versions of these logics and obtain the following results: SO-HORN$^r$ and ($\exists$)SO-KROM$^r$ capture P and NL on ordered structures, respectively, and both SO-EHORN and $\Pi^{1}_{1}$-KROM capture co-NP.
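
As a small illustration of the kind of sentence involved (a textbook-style example, not taken from the talk): over directed graphs with constants s and t, the SO-HORN sentence

$\exists R \, \forall x \, \forall y \; \big( R(s) \wedge (R(x) \wedge E(x,y) \rightarrow R(y)) \wedge (R(t) \rightarrow \bot) \big)$

expresses that t is not reachable from s; every clause contains at most one positive occurrence of the quantified relation R, which is exactly the Horn condition.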

November 4, 2011 at 15:00 in S2|15-201
Jaime Gaspar
Goedel functional interpretation
This talk is a one-hour mini-tutorial introducing from scratch the bare-bones of the Goedel functional interpretation.

November 4, 2011 at 13:30 in S2|15-201
Pavol Safarik
A quantitative nonlinear strong ergodic theorem for Hilbert spaces

Abstract: We give a quantitative version of a strong nonlinear ergodic theorem for (a class of possibly even discontinuous) selfmappings of an arbitrary subset of a Hilbert space due to R. Wittmann and outline how the existence of uniform bounds in such quantitative formulations of ergodic theorems can be proved by means of a general logical metatheorem. In particular these bounds depend neither on the operator nor on the initial point. Furthermore, we extract such uniform bounds in our quantitative formulation of Wittmann's theorem, implicitly using the proof-theoretic techniques on which the metatheorem is based. However, we present our result and its proof in analytic terms without any reference to logic as such. Our bounds turn out to involve nested iterations of relatively low computational complexity. While in theory this kind of iteration ought to be expected, so far this seems to be the first occurrence of such a nested use observed in practice.

October 28, 2011 at 13:30 in S2|15-201
Vassilis Gregoriades
The effective theory on arbitrary Polish spaces

Abstract: We will recall a constructive scheme (which was presented in a previous talk of this seminar) which assigns to every tree T on the naturals a Polish space N(T) which is recursively presented in T. The effective structure of N(T) depends on the combinatorial properties of T, so that for various choices of a recursive tree T the spaces N(T) are not effectively-Borel isomorphic to each other. In particular there are recursively presented Polish spaces which are uncountable sets and yet not effectively-Borel isomorphic to the Baire space. On the other hand every recursively presented Polish space is effectively-Borel isomorphic to one of the spaces N(T), so the latter spaces are the correct effective analogue of the Baire space. We are going to present the recent results concerning the structure of the spaces N(T) and give a characterization of N(T) being effectively-Borel isomorphic to the Baire space in terms of the intrinsic properties of T.

October 7, 2011 at 13:30 in S2|15-201
Takayuki Kihara, Tohoku University
Non-computability of planar continua

Abstract: Every nonempty open set in a computable metric space contains a computable point. In contrast, the Non-Basis Theorem asserts that a nonempty co-c.e. closed set in Cantor space (hence, even in the real line) can avoid any computable points. Recent exciting progress in Computable Analysis naturally raises the question whether Non-Basis Theorems exist for connected co-c.e. closed sets. In this talk, we show global and local non-computability results concerning connectivity, local connectivity, simple connectivity, and contractibility for planar continua.

October 7, 2011 at 10:30 in S2|15-201
Hideki Tsuiki, Kyoto University
Unimodal Maps as Boundary-Restrictions of Two-Dimensional Full-Folding Maps

Abstract: I will start my talk with ideas on computational aspects of topological spaces. Then I will present what I have obtained for dynamical systems through this idea, with an interactive program showing beautiful 2-dimensional figures. It is a generalization of Gray code and the tent function to other topological spaces, and no background knowledge is required to understand it.

September 16, 2011 at 13:30 in S2|15-201
Akitoshi Kawamura, University of Tokyo
Distance k-Sectors and zone diagrams

Abstract: The bisector of two nonempty sets P and Q in a metric space is the set of all points with equal distance to P and to Q. A distance k-sector of P and Q, where k is an integer, is a (k-1)-tuple (C_1, C_2, …, C_{k-1}) such that C_i is the bisector of C_{i-1} and C_{i+1} for every i = 1, 2, …, k-1, where C_0 = P and C_k = Q. This notion, for the case where P and Q are points in the Euclidean plane, was introduced by Asano, Matousek, and Tokuyama, motivated by a question of Murata in VLSI design. We prove the existence of a distance k-sector for all k and for every two disjoint, nonempty, closed sets P and Q in Euclidean spaces of any (finite) dimension, or more generally, in proper geodesic spaces (uniqueness remains open). Another generalization of the bisector is the Voronoi diagram, which is the “bisector between n sites”. Likewise one can consider the “trisector between n sites”, leading to the notion of zone diagrams. Asano, Matousek, and Tokuyama proved the existence and uniqueness of a zone diagram for point sites in the Euclidean plane, and Reem and Reich showed existence for two arbitrary sites in an arbitrary metric space. We establish (by a simpler proof than Asano et al.'s) the existence and uniqueness of the zone diagram for n positively separated sites in a Euclidean space of arbitrary (finite) dimension. This talk is based on joint work with K. Imai, J. Matoušek, D. Reem and T. Tokuyama.
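
In symbols (restating the definitions above), for nonempty sets P and Q in a metric space (X, d) the bisector is $B(P,Q) = \{ x \in X : d(x,P) = d(x,Q) \}$ with $d(x,P) = \inf_{p \in P} d(x,p)$, and a distance k-sector is a tuple $(C_1, \ldots, C_{k-1})$ satisfying $C_i = B(C_{i-1}, C_{i+1})$ for $i = 1, \ldots, k-1$, where $C_0 = P$ and $C_k = Q$.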
http://www-imai.is.s.u-tokyo.ac.jp/~kawamura/papers/k-sectors/
http://www-imai.is.s.u-tokyo.ac.jp/~kawamura/papers/zone/

August 5, 2011 at 13:30 in S2|15-201
Akitoshi Kawamura, University of Tokyo
Why Lipschitz Continuous Ordinary Differential Equations are Polynomial-Space Complete

Abstract: How complex can the solution of a differential equation be, compared to the function used to describe the equation? This question can be given a precise sense consistent with the standard computational complexity theory. In this talk, I will first explain how we define polynomial-time computability of real functions. Then I will discuss the complexity of the solution of initial value problems of differential equations under various assumptions. In particular, an initial value problem given by a polynomial-time computable, Lipschitz continuous function can have a polynomial-space complete solution. This answers Ko's question raised in 1983. Time permitting, I will also mention what this kind of result about the complexity of possible solutions often implies about the hardness of actually finding the solution when the differential equation is given.

July 15, 2011 at 13:30 in S2|15-201
Jaime Gaspar*
Copies of Classical Logic in Intuitionistic Logic

Abstract: Classical logic is the usual logic in mathematics, and intuitionistic logic is the constructive part of classical logic. Despite their differences, classical logic can be embedded in intuitionistic logic, that is, there are copies of classical logic in intuitionistic logic. The copies found
in the literature are all the same. So, is the copy unique? In this talk we answer this question.

* Financially supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant SFRH/BD/36358/2007.

July 8, 2011 at 13:30 in S2|15-201
Vasco Brattka, Cape Town
Computable Analysis in the Weihrauch Lattice

Abstract: We present recent results on the classification of the computational content of theorems in the Weihrauch lattice. This lattice is a refinement of the Borel hierarchy and allows us to classify the computational content of theorems easily. The essential idea is to view theorems as (possibly multi-valued) functions that transform certain input data into certain output data. The question is then which theorems can be computably and continuously transformed into each other. So far, theorems such as the Baire Category Theorem, the Banach Inverse Mapping Theorem, the Intermediate Value Theorem, the Hahn-Banach Theorem, the Bolzano-Weierstrass Theorem and several other theorems from analysis have been classified from this perspective. The results are not only natural refinements of what is known from reverse mathematics, but a classification of the Weihrauch degree of a theorem leads to a relatively complete understanding of the respective theorem regarding its uniform and non-uniform computability properties.
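
As one concrete instance of the "theorems as multi-valued functions" viewpoint (a standard example in this area, spelled out here for readers new to it), the Intermediate Value Theorem becomes the partial multi-valued map

$\mathrm{IVT} :\subseteq C[0,1] \rightrightarrows [0,1], \qquad f \mapsto \{ x \in [0,1] : f(x) = 0 \},$

defined on those f with $f(0) \cdot f(1) < 0$; classifying the theorem then means locating this map in the Weihrauch lattice relative to the maps arising from other theorems.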

July 7, 2011 at 13:30 in S2|15-201
Victor Poupet, Marseille
An Elementary Construction of an Aperiodic Tile Set

Abstract: Aperiodic tile sets have been widely studied since their introduction in 1962 by Hao Wang. It was initially conjectured by Wang that it was impossible to enforce the aperiodicity of a coloring of the discrete plane Z^2 with a finite set of local constraints (there either was a valid periodic coloring or none at all). This would imply that it was decidable whether there existed a valid coloring of the plane for a given set of local rules. This last problem was introduced as the domino problem, and eventually proved undecidable by Robert Berger in 1964. In doing so, Berger produced the first known aperiodic tile set: a set of local constraints that admitted valid colorings of the plane, none of them periodic. Berger's proof was later made significantly simpler by Raphael M. Robinson in 1971 who created the set of Wang tiles now commonly known as the Robinson tile set.

Since then many other aperiodic tile sets have been found, not only on the discrete plane, but also on the continuous plane (the most famous being probably Penrose's “kite and darts” set of two tiles).
In this talk, I will describe yet another construction of an aperiodic tile set. Although the resulting tile set will produce tilings quite similar to those of Robinson's (an infinite hierarchical structure of embedded squares) the local constraints will be presented in a (hopefully) more natural way: we will start from simple geometrical figures and organize them step by step by adding new rules progressively to enforce the global structure.

May 20, 2011 at 13:30 in S2|15-201
Martin Lotz, Edinburgh
Geometry and Complexity in Optimization

Abstract: The computational complexity of optimization problems depends crucially on the geometry of the problem. In this talk we will discuss geometric measures of condition for optimization problems such as linear programming, and show an application to compressed sensing: the problem of finding sparse solutions to under-determined systems of equations in polynomial time.
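
The sparse-recovery problem alluded to here is usually formalised as follows (a standard compressed-sensing formulation, included only for context): given an under-determined system $Ax = b$, the intractable search for the sparsest solution, $\min_x \|x\|_0$ subject to $Ax = b$, is replaced by its convex relaxation

$\min_x \|x\|_1 \quad \text{subject to} \quad Ax = b,$

which can be solved in polynomial time (it can be written as a linear program) and recovers the sparse solution under suitable conditions on A; condition measures of the kind discussed in the talk quantify how robustly this works.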

May 13, 2011 at 13:30 in S2|15-201
Christian Ikenmeyer, Universität Paderborn
Introduction to Geometric Complexity Theory and Tensor Rank

Abstract: Mulmuley and Sohoni in 2001 proposed to view the P vs NP and related problems as specific orbit closure problems and to attack them by methods from geometric invariant and representation theory. We merge these ideas with work of Strassen towards the goal of showing lower bounds for the computational complexity of matrix multiplication. This talk explains some key ideas of this approach on a basic level.

Tuesday, May 10, 2011 at 13:30 in S2|15-201
Jaap van Oosten, Universiteit Utrecht
Another Heyting Algebra for embedding the Turing degrees

Abstract: Simpson has been arguing in recent years that classical degree theory is “dead” and can be “redeemed” by studying the Medvedev lattice. The Medvedev lattice embeds the Turing and Muchnik degrees and its dual is a Heyting algebra. The Medvedev lattice is intimately related to the notion of realizability studied by Kleene and Vesley. In this talk another Heyting algebra will be presented in which the Turing degrees embed: the “lattice of local operators in the effective topos”. The structure will be defined in a very concrete way, and some methods for determining equality/inequality between its members will be discussed.

May 6, 2011 at 13:30 in S2|15-201
Davorin Lesnik
Synthetic topology

Abstract: Synthetic topology is the study of mathematical models in which objects possess an intrinsic topology in a suitable sense and all maps are continuous. Topological definitions and theorems thus become statements about (internal) logic. The upshot is that this approach gives new insight, topological theorems often become easier to prove, and their validity transfers to classical topology.


At the seminar we will discuss the origins, basic ideas and notions of
synthetic topology, and some proofs. Depending on time, we'll speak about the logical and computational aspects of the theory.

April 21, 2011 at 14:00 in S2|15-201
Jaime Gaspar
Proof mining Hillam's theorem on fixed point iterations



Abstract: Hillam's theorem gives a characterisation of the convergence of fixed point iterations. We present a quantitative version of Hillam's theorem where we give an explicit rate of metastability (a kind of “finitary rate of convergence”) for the fixed point iteration. We explain how this result is obtained by methods of proof theory used in the proof mining program, called functional interpretations, which extract computational content from proofs.
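
For reference, a rate of metastability for a sequence $(x_n)$ in a metric space $(X,d)$ is, in the standard finitary formulation used in proof mining, a bound $\Phi$ such that

$\forall \varepsilon > 0 \ \forall g : \mathbb{N} \to \mathbb{N} \ \exists N \leq \Phi(\varepsilon, g) \ \forall i, j \in [N, N + g(N)] \ \ d(x_i, x_j) \leq \varepsilon;$

this statement is classically equivalent to $(x_n)$ being Cauchy but, unlike a full rate of convergence, admits effective and highly uniform bounds.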

February 18, 2011 at 14:00 in S2|15-201
Arne Seehaus
Is mathematics “(Big) Science” in the sense of Thomas Kuhn and Derek John de Solla Price?

February 11, 2011 at 14:00 in S2|15-201
Alexander Kartzow
First-order logic on generalisations of pushdown graphs

January 28, 2011 at 14:00 in S2|15-201
Martin Ziegler
Relative Computability and Uniform Continuity of Relations



Abstract: In their search for a topological characterization of relatively (i.e. oracle-) computable relations (aka multivalued functions), Brattka and Hertling (1994) have considered two notions: weak continuity (which is weaker than relative computability) and strong continuity (which is stronger than relative computability). Based on the Henkin quantifier, we propose a notion of uniform continuity and explore its relations to weak/strong continuity as well as to relative computability.
January 21, 2011 at 14:00 in S2|15-201
Takako Nemoto, Bern
Determinacy and reverse mathematics



Abstract: It is known that determinacy of infinite games characterizes many subsystems of second order arithmetic.

In this talk, we treat relatively stronger systems, beyond the so-called “big five” systems, and consider the possibility of using determinacy as a tool to investigate strong systems of second order arithmetic.

January 14, 2011 at 14:00 in S2|15-201
Isolde Adler, Frankfurt
Nowhere dense graph classes, stability, and the independence property

Abstract: Recently, Nešetřil and Ossona de Mendez introduced nowhere dense classes of finite graphs, a generalisation of many natural and important graph classes such as graphs of bounded degree, planar graphs, graphs excluding a fixed minor and graphs of bounded expansion. These graph classes play an important role in algorithmic graph theory, as many computational problems that are hard in general become tractable when restricted to such classes.


We observe that the notion of nowhere density is essentially the earlier stability theoretic notion of superflatness, introduced by Podewski and Ziegler in 1978. With this, we show that for a subgraph-closed class C of (not necessarily finite) graphs, the following are equivalent:

1) C is nowhere dense
2) C is stable
3) C is dependent

The result can be generalised to classes of coloured digraphs. It fails if C is not subgraph-closed. For arbitrary graph classes, dependence is the most general notion, generalising both nowhere density and stability, as well as (locally) bounded clique-width.

This is joint work with Hans Adler.

November 19, 2010 at 14:00 in S2|15-201
Dr. Vadim Puzarenko, Russian Academy of Science, Novosibirsk
A reducibility on structures
Abstract:
November 12, 2010 at 14:00 in S2|15-201
Carsten Rösnick
Approximate Real Function Maximization and Query Complexity

Abstract: Let MAX,INT : C[0,1] -> R be the functionals for maximization and integration, respectively. I.e.,

MAX(f) = sup_{0<x<1} f(x) , INT(f) = int_0^1 f(t) dt .


It has been shown by Friedman and Ko (1982) that, restricted to polynomial-time computable f, these functionals can be approximated up to an error 2^{-n} in space polynomial in n. In this framework of computation, each computable real function f : [0,1] -> R is essentially represented as a type-2 function (in the sense of Weihrauch), mapping a rational sequence to a rational sequence, i.e., f \in (N->Q) -> (N->Q). This enables us to discuss the computational complexity of a real-valued function on classical Turing machines with oracle access. Here, a TM M computing f is presented with an oracle phi \in (N->Q) for x in [0,1] and a precision n in N, such that M produces the output M^\phi(n) \in Q with


|M^\phi(n) - f(x)| < 2^{-n}.


When it comes to the computation of the functionals MAX and INT, the encoding of an input for a Turing machine becomes even more critical: Instead of representing an x \in [0,1] by a fast converging sequence \phi \in (N->Q), we now have to encode a continuous real-valued function f : [0,1] -> R. The first goal of this thesis is to devise, compare and explore different ways of accessing an unknown continuous real function via black-box queries that model the actual computation of f as supported by certified numerics like interval arithmetic or the iRRAM. Building on this, the second goal of this Master's Thesis is to determine the query complexity of the MAX-functional with respect to different protocols. In combination with a complexity analysis of the protocols themselves, this should yield more explicit quantitative and parameterized upper and lower bounds (compared to those presented at the beginning) on the complexity of MAX (and INT).
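
To make the black-box idea concrete, here is a minimal sketch (my illustration, not part of the thesis): it assumes a hypothetical protocol in which the oracle answers point queries eval_f(x, k) with a rational 2^-k-approximation of f(x) and also supplies a modulus of (uniform) continuity modulus(k). Under these assumptions MAX(f) can be approximated to error 2^-n by querying a sufficiently fine dyadic grid:

    from fractions import Fraction

    def max_approx(eval_f, modulus, n):
        # Approximate MAX(f) = sup f(x) over [0,1] up to error 2^-n.
        #   eval_f(x, k) -- hypothetical oracle: a rational q with |q - f(x)| <= 2^-k
        #   modulus(k)   -- hypothetical oracle: m with |x - y| <= 2^-m  =>  |f(x) - f(y)| <= 2^-k
        k = n + 2                                     # two extra bits of working precision
        m = modulus(k)                                # grid so fine that f varies by <= 2^-k per cell
        grid = [Fraction(i, 2 ** m) for i in range(2 ** m + 1)]
        # grid error (<= 2^-k) plus evaluation error (<= 2^-k) stays below 2^-n
        return max(eval_f(x, k) for x in grid)

    if __name__ == "__main__":
        f = lambda x: x * (1 - x)                     # toy function with maximum 1/4
        eval_f = lambda x, k: f(x)                    # evaluation is exact on rationals here
        modulus = lambda k: k                         # |f(x) - f(y)| <= |x - y| on [0,1]
        print(max_approx(eval_f, modulus, 10))        # prints 1/4

The exponentially many grid queries in n are consistent with the hardness results recalled at the beginning; one motivation for comparing different query protocols is to see which kinds of queries can reduce this cost.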


Logic Seminar 2009, 2010

August 6, 2010 at 14:00 in S215/201
René Hartung

Computation with infinitely presented groups

Abstract: We give an overview of the current methods for computing with a class of infinitely presented groups, the so-called finitely $L$-presented groups. These infinite presentations generalize finite presentations and they allow interesting applications. In particular, our methods allow us to compute the lower central series quotients and the Dwyer quotients of the Schur multiplier.





July 13, 2010 at 14:00 in S215/201
Russell Miller, City University of New York

Computable fields and their algebraic closures


Abstract: We present recent results about computable fields F, especially those algebraic over their prime subfields, and embeddings of them into their algebraic closures. The discussion begins with the basic definitions and a review of Rabin's Theorem, and continues with more exact computability-theoretic comparisons, by Steiner and by the speaker, among the set of irreducible polynomials in F[X], the set of polynomials there with roots in F, and the image of F within computable presentations of its algebraic closure. Finally, we will discuss the Blum-Shub-Smale model of computation on the real numbers, and explain how it changes the array of tools available for examination of these questions.
June 25, 2010 at 14:00 in S215/201
Jaime Gaspar, TU Darmstadt

Proof interpretations with truth
Abstract: A proof interpretation I of a theory S in a theory T is a function mapping a formula A of S to a formula A_I(x) of T (with distinguished variables x) verifying: if S \vdash A, then T \vdash A_I(t) for some terms t extracted from a proof of A. Proof interpretations are used in:
• consistency results (e.g., if \bot_I = \bot, then S \vdash \bot implies T \vdash \bot, i.e., S is consistent relative to T);
• conservation results (e.g., if T \vdash A_I -> A for \Pi^0_2 formulas A, then S \vdash A implies T \vdash A, i.e., S is conservative over T for \Pi^0_2 formulas);
• unprovability results (e.g., if there are no terms t such that T \vdash A_I(t), then S does not prove A);
• closure under rules (e.g., if S = T, (\exists y A(y))_I(x) = A_I(x) and T \vdash A_I(x) -> A(x), then T \vdash \exists y A(y) implies T \vdash A(t) for some term t, i.e., T has the existence property);
• extracting computational content from proofs (e.g., extracting t in the previous point).
Closure under rules needs T \vdash A_I(x) -> A, which can be achieved by:
• upgrading T to the characterization theory CT that proves \exists x A_I(x) <-> A;
• or hardwiring truth in I, obtaining I_t verifying T \vdash A_{I_t}(x) -> A.
The first option doesn't work if:
• CT is classically inconsistent (e.g., bounded proof interpretations);
• or we want applications to theories weaker than CT.
So we turn to the second option and present a method to hardwire truth (in proof interpretations of Heyting arithmetic satisfying some mild conditions) by defining:
• a function c that replaces each subformula A of a formula by A \wedge A^c, where A^c is a “copy” of A;
• an “inverse” function c^{-1} that replaces the “copies” A^c by the “originals” A;
• I_t = c^{-1} \circ I \circ c.
As examples we hardwire truth in:
• modified realizability;
• the Diller-Nahm functional interpretation;
• bounded modified realizability;
• the bounded functional interpretation;
• the slash.

This is based on a joint work with Paulo Oliva.
June 18, 2010 at 14:00 in S215/201
Prof. Hajime Ishihara, Japan Advanced Institute of Science and Technology

The monotone completeness theorem in constructive reverse mathematics
Abstract: We investigate the monotone completeness theorem (MCT): every bounded monotone sequence of real numbers converges. We work within a subsystem of elementary analysis EL, as a formal framework for constructive reverse mathematics, aiming at classifying various theorems in intuitionistic, constructive recursive and classical mathematics. We distinguish a Cauchy sequence with modulus from a Cauchy sequence without modulus.

We give a logical principle which is stronger than LPO, and show that it is equivalent to the statement:


(1) Every bounded monotone sequence of real numbers is a Cauchy sequence without modulus.


We also give a function existence axiom which is weaker than the number-number choice for Pi0_1 formulas, and show that it is equivalent to the statement:


(2) Every Cauchy sequence without modulus is a Cauchy sequence with modulus.

Therefore MCT is divided into the logical statement (1), the function existence statement (2), and the Cauchy completeness of the reals which is constructively provable without countable choice.
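
For concreteness, the two notions contrasted in (1) and (2) are the standard ones: a sequence $(x_n)$ of reals is a Cauchy sequence with modulus if there is a function $M : \mathbb{N} \to \mathbb{N}$ such that $\forall k \ \forall m, n \geq M(k) \ |x_m - x_n| \leq 2^{-k}$, and a Cauchy sequence without modulus if merely $\forall k \ \exists N \ \forall m, n \geq N \ |x_m - x_n| \leq 2^{-k}$; constructively, passing from the second to the first is exactly what the function existence axiom in (2) provides.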

June 16, 2010 at 14:00 in S215/201
Michal Stronkowski, Charles University, Prague

Finite axiomatization of quasivarieties of relational structures
Abstract: Much attention has been paid in the past to the finite axiomatization of varieties and quasivarieties of algebras, but the corresponding problem for relational structures has appeared only sporadically. I will review what is known about this and add one new theorem. Namely, we prove that the graphs of monoids or groups (as relational structures) almost always generate non-finitely axiomatizable quasivarieties.

June 15, 2010 at 14:00 in S215/201
Prof. Jimmie Lawson, Louisiana State University, Baton Rouge
Clifford Algebras, Möbius Transformations, Vahlen Matrices, and Bruck Loops


June 4, 2010 at 14:00 in S215/201
Prof. Klaus Meer, Brandenburgische TU Cottbus

On the expressive power of CNF formulas of bounded Tree- and Clique-Width

Abstract: We study the expressive power of certain families of polynomials over a field K. The latter are defined via Boolean formulas in conjunctive normal form. We attach in a natural way a graph to such a formula and study the resulting polynomial families when certain parameters of that graph, like tree- and clique-width, are assumed to be bounded.
May 21, 2010 at 14:00 in S215/201
Priv.-Doz. Dr. Norbert Müller, Universität Trier

Exact real arithmetic – real numbers from the perspective of computer science
Abstract: The set of real numbers is uncountable, unlike, say, the natural numbers, which are far easier to grasp algorithmically. As a consequence, already the definition of the notion of “computability” for real numbers is considerably more complex than the classical definition of computability for natural numbers via Turing machines.

Implementations of this notion of computability are often called “exact real arithmetic”; as a rule they are based on interval arithmetic with multiple-precision numbers, complemented by automatic precision control. This makes numerical algorithms possible that would not be meaningful in the finite world of double-precision numbers.
In the talk we present some of the principles of such an implementation and, as examples, consider in particular its application to (and its behaviour on) the logistic map and simple differential equations.
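
As a toy illustration of the interval-arithmetic-with-precision-control idea (my sketch, not Müller's implementation; real systems such as the iRRAM use multiple-precision interval or ball arithmetic rather than exact rationals): iterate the logistic map in interval arithmetic, round outward to p bits after each step, and restart with a larger p until the requested output accuracy is reached.

    import math
    from fractions import Fraction

    R = Fraction(15, 4)                     # logistic parameter r = 3.75, chosen for the demo

    def step(lo, hi, p):
        # One interval step of x -> r*x*(1-x), followed by outward rounding to p bits.
        corners = [R * a * (1 - b) for a in (lo, hi) for b in (lo, hi)]
        new_lo, new_hi = min(corners), max(corners)
        scale = 2 ** p
        return (Fraction(math.floor(new_lo * scale), scale),
                Fraction(math.ceil(new_hi * scale), scale))

    def logistic_enclosure(x0, steps, target_bits):
        # Enclose x_steps with width <= 2^-target_bits, doubling the working
        # precision p until the enclosure is tight enough (automatic precision control).
        p = target_bits
        while True:
            lo = hi = Fraction(x0)
            for _ in range(steps):
                lo, hi = step(lo, hi, p)
            if hi - lo <= Fraction(1, 2 ** target_bits):
                return lo, hi, p
            p *= 2

    if __name__ == "__main__":
        lo, hi, p = logistic_enclosure(Fraction(1, 8), steps=30, target_bits=20)
        print("x_30 lies in [%.8f, %.8f]; %d working bits were needed" % (float(lo), float(hi), p))

Fixed double precision can silently lose all significant digits after enough iterations; the restart-with-more-precision loop above is the simplest form of the automatic precision control mentioned in the abstract.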

February 12, 2010 at 14:00 in S215/201
Jaime Gaspar, TU Darmstadt

Slash and completeness


Abstract: Gödel's incompleteness theorems dealt a fatal blow to Hilbert's program. Despite this, there were some attempts to salvage the program. One of them, by Hilbert himself, was to add the omega-rule (if A(n) is provable for each fixed n, then “for all n, A(n)” is provable) to Peano arithmetic. This rule evades Gödel's incompleteness and gives a complete arithmetical theory. Proof interpretations are proof-theoretic tools to prove consistency results, conservation results and closure under rules, and to extract computational content from proofs. The slash is a very simple toy case of a proof interpretation. However, the omega-rule boosts the slash, resulting in some interesting applications. We will talk about one of them: a very simple proof of the completeness of Peano arithmetic plus the omega-rule.

POSTPONED!

Martin Otto, TU Darmstadt
Acyclicity in hypergraph covers and applications


Abstract: Which degrees of acyclicity can be achieved in FINITE hypergraph covers? While actual acyclicity can always be had in generally infinite unfoldings, finite covers can only guarantee certain bounded forms of acyclicity at best. I want to discuss new constructions of weakly N-acyclic covers (joint work with Vince Barany and Georg Gottlob, Oxford) and N-acyclic covers with a view to their use in the finite model theory of guarded logics.

January 29, 2010 at 14:00 in S215/201
Martin Ziegler, TU Darmstadt

Expressiveness and Computational Complexity of Euclidean Quantum Logic


Abstract: Quantum logic generalizes, and in dimension one coincides with, Boolean logic. The satisfiability problem of quantum logic formulas is shown NP-complete in dimension two. We then extend these considerations to three and higher-dimensional Euclidean spaces R^d and C^d. For fixed d>2, quantum satisfiability turns out polytime-equivalent to the real feasibility of a multivariate quartic polynomial equation: a well-known problem lying (probably strictly) between NP and PSPACE. We finally investigate the problem over INdefinite finite dimensions and relate it to the real feasibility of quartic NONcommutative *-polynomial equations.

November 27, 2009 at 14:00 in S215/201
Peter Scheiblechner, Purdue

Comparison of Complexity over the Real vs. Complex Numbers


Abstract: We compare the complexity of certain problems over the reals with their corresponding versions over the complex numbers. We argue in particular that the problem of deciding whether a system of (real) polynomial equations has a real solution is strictly harder than asking for a complex solution, since it cannot be solved by a complex BSS-machine. On the other hand we describe a family of problems whose real versions are, in the model of algebraic decision trees, strictly easier than their complex versions. To this end we prove a new lower bound for the decision complexity of a closed algebraic set X in C^n in terms of the sum of its (compactly supported) Betti numbers b_c(X), which is for the first time better than logarithmic in b_c(X).


November 13, 2009 at 14:00 in S215/201
Matthias Schroeder, Munich

Topology in computable analysis


Abstract: Computable Analysis studies computability over the real numbers and related spaces. Since computable functions are continuous in some sense, topology plays an important role in this theory. The cartesian closed category QCB of quotients of countably based topological spaces turns out to be an appropriate category for modelling the topological aspects of higher type computation.


In this talk, we discuss topological properties of the Kleene-Kreisel continuous functionals and their surprising relationship to an open problem in exact real-number computation. This problem is whether two approaches in functional programming on higher type computability over the reals coincide.


We show that the sequential topology on the space N^(N^N), which is the space of Kleene-Kreisel continuous functionals of type 2, is neither zero-dimensional nor regular. Moreover, we establish non-regularity of further function spaces formed in the cartesian closed category of sequential spaces. This is one of the categories from which QCB inherits its cartesian closed structure.

November 6, 2009 at 14:00 in S215/201
Stephane Le Roux, LIX, Ecole Polytechnique

Sequential game theory: a formal and abstract approach


Abstract: In a game from game theory, a Nash equilibrium (NE) is a combination of one strategy per agent such that no agent can increase his payoff by unilaterally changing his strategy. Kuhn proved that all (tree-like) sequential games have NE. Osborne and Rubinstein abstracted over these games and Kuhn's result: they proved a *sufficient* condition on agent *preferences* for all games to have NE. I will present a *necessary and sufficient* condition, thus accounting for the game-theoretic frameworks that were left aside. The proof is *formalised* using Coq, and contrary to usual game theory it adopts an inductive approach to trees for definitions and proofs. By rephrasing a few game-theoretic concepts, by ignoring useless ones, and by characterising the proof-theoretic strength of Kuhn's/Osborne and Rubinstein's development, the talk also intends to clarify sequential game theory.


Then I will sketch an alternative proof. Both proofs so far pre-process the preferences by topological sorting, which is surprising since it decreases the number of NE. Finally I will sketch two other proofs that pre-process the preferences only by transitive closure. The four proofs are all constructive and by induction, and I will compare them when seen as algorithms. The four proofs also constitute diverse viewpoints on the same concepts, the trial of diverse techniques against the same problem, and reusable techniques for similar structures.
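
For readers who want to see the classical argument in miniature, here is a small backward-induction sketch (my illustration of the textbook proof idea behind Kuhn's result for finite trees with real-valued payoffs; it is not the Coq development and does not cover the generalised preferences discussed above):

    # A leaf is a payoff dict {player: payoff}; an internal node is (player, children).
    # Backward induction lets the player at each node pick a child maximising their
    # own payoff, given the choices already fixed in all subtrees; the resulting
    # strategy profile is a subgame-perfect, hence Nash, equilibrium.

    def backward_induction(node):
        # Returns (strategy, payoffs); strategy maps node id -> index of the chosen child.
        if isinstance(node, dict):           # leaf: payoffs are already determined
            return {}, node
        player, children = node
        strategy, outcomes = {}, []
        for child in children:
            sub_strategy, payoffs = backward_induction(child)
            strategy.update(sub_strategy)    # keep the choices made in every subtree
            outcomes.append(payoffs)
        best = max(range(len(children)), key=lambda i: outcomes[i][player])
        strategy[id(node)] = best
        return strategy, outcomes[best]

    # Toy two-player game: player 1 moves first, then player 2.
    game = (1, [(2, [{1: 3, 2: 1}, {1: 0, 2: 2}]),
                (2, [{1: 2, 2: 2}, {1: 1, 2: 0}])])

    if __name__ == "__main__":
        strategy, payoffs = backward_induction(game)
        print("equilibrium payoffs:", payoffs)        # {1: 2, 2: 2}

The recursion over subtrees mirrors the inductive approach to trees mentioned above, as opposed to reasoning about plays or histories.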


October 9, 2009 at 14:00 in S215/201
Jaime Gaspar, TU Darmstadt

Proof interpretations with truth


Abstract: Proof interpretations are proof-theoretic tools to extract computational content (such as computable choice functions) from proofs. In some cases this only works (at least without strengthening the theories involved) for proofs of theorems of a particular logical form, but we wish to extract computational content from proofs of arbitrary theorems. Interpretations with truth have been developed to cope with this. We present two new methods for hardwiring truth and apply them to new proof interpretations.


In a first part we make a brief introduction to proof interpretations and the extraction of computational content from proofs. Then in a second part we present our two methods. Finally in a third part we apply them in some case studies.


This is joint work with Paulo Oliva.

September 18, 2009 at 14:00 in S215/201
Martin Ziegler, TU Darmstadt

Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability


Abstract: It is folklore particularly in numerical and computer sciences that, instead of solving some general problem f:A->B, additional structural information about the input x in A (that is any kind of promise that x belongs to a certain subset A' of A) should be taken advantage of. Some examples from real number computation show that such discrete advice can even make the difference between computability and uncomputability. We turn this into a both topological and combinatorial complexity theory of information, investigating for several practical problems how much advice is necessary and sufficient to render them computable. Specifically, finding a nontrivial solution to a homogeneous linear equation A*x=0 for a given singular real NxN-matrix A is possible when knowing rank(A)=0,1,…,N-1; and we show this to be best possible. Similarly, diagonalizing (i.e. finding a BASIS of eigenvectors of) a given real symmetric NxN-matrix is possible when knowing the number of distinct eigenvalues: an integer between 1 and N (the latter corresponding to the nondegenerate case). And again we show that N-fold (i.e. roughly log N bits of) additional information is indeed necessary in order to render this problem (continuous and) computable; whereas for finding SOME SINGLE eigenvector of A, providing the truncated binary logarithm of the least-dimensional eigenspace of A--i.e. Theta(log N)-fold advice--is sufficient and optimal.

September 4, 2009 at 14.00 in S215/201
Vassilis Gregoriades, Athens

Effective Descriptive Set Theory and Applications in Analysis


Abstract: Descriptive Set Theory is the area of mathematics which studies the structure of definable sets in separable and complete metric spaces. A central part of this area is Effective Descriptive Set Theory (Effective Theory, for short), which uses ideas from Logic and in particular from Recursion Theory. This area provides a deeper understanding of the subject and has applications in Analysis. We begin this lecture with a brief summary of the basic concepts and we give a new approach to Effective Theory which yields some surprising results. Then we present some connections with General Topology; all theorems we talk about involve notions from both areas. Finally we give some results in Banach space theory. Here all theorems involve only classical (i.e., non-effective) notions, but still Effective Theory is essential for their proof.

August 14, 2009 at 14:00 in S215/201
Richard Garner, Cambridge

Ionads: a generalised notion of topological space

July 17, 2009 at 14:00 in S215/201
Martin Otto, TU Darmstadt

Acyclicity in hypergraph covers

July 10, 2009 at 14:00 in S215/201
Pavol Safarik, TU Darmstadt

On the Computational Content of Bolzano-Weierstraß


Abstract: We will apply the methods developed in the field of 'proof mining' to the Bolzano-Weierstraß theorem BW and calibrate the computational contribution of using this theorem in a proof of a combinatorial statement. Using a simple proof based on weak König's lemma and arithmetical comprehension we provide an explicit solution of the Gödel functional interpretation as well as the monotone functional interpretation of BW for the product space \prod_{i \in N} [-k_i, k_i] for a given sequence of rational numbers (k_i) \subseteq Q (with the standard product metric). In fact, we will use it to get optimal program and bound extraction theorems for proofs based on fixed instances of BW, i.e. for BW applied to fixed sequences in \prod_{i \in N} [-k_i, k_i].

July 3, 2009 at 14:00 in S215/201
Anuj Dawar, Cambridge

Logics with Rank Operators


Abstract: It is a long-standing open problem in descriptive complexity theory whether there is a logic that expresses exactly the polynomial-time decidable properties of unordered structures. It has been known since the early 90s that fixed-point logic with counting (FPC) is not sufficient and there have been several proposals for more expressive logics. Taking our inspiration from recent results that show that natural linear-algebraic operations are not definable in FPC, we propose FPR – an extension of fixed-point logic with an operator for matrix rank. We show that this is strictly more expressive than FPC and can express the polynomial-time problems that have been shown to be inexpressible in FPC. We also show that FO+R, the extension of first-order logic with rank operators, is surprisingly expressive. It can express some forms of transitive closure and, on ordered structures, captures the complexity classes Mod_p L.


In this talk, I will give the history and general background to the question and explain the context of the new results. I will also point to several interesting open problems. (Joint work with Martin Grohe, Bjarki Holm and Bastian Laubner).