**Date**: The 28th of March, from 9:30am to 5pm, just before the JNMI.

**Location**: INRIA, Université de Lille, bâtiment B

**Organizer**: Charles Paperman

All the talks will be in English.

9h30

## When Locality Meets Preservation

The Gaifman normal form is extensively used in Finite Model Theory to bypass compactness and quantifier elimination results. We study a restriction of the Gaifman normal form, namely positive Boolean combinations of basic local sentences, and show that they match exactly the first-order sentences preserved under local elementary embeddings. This result provides a new general preservation theorem, extending the Łoś-Tarski Theorem. As usual, this full preservation result fails in the finite, and we show furthermore that the naturally related decision problems are undecidable. Nevertheless, we can leverage our understanding of this restriction of the Gaifman normal form to study preservation under extensions and to design new well-behaved classes of structures: we show that preservation under extensions holds if and only if it holds locally.

10h
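For readers unfamiliar with the terminology: a basic local sentence, the building block of the Gaifman normal form, asserts the existence of many pairwise far-apart witnesses of a local property. In the standard textbook notation (not notation from the talk itself), it has the shape

```latex
\exists x_1 \ldots \exists x_n \Big( \bigwedge_{1 \le i < j \le n} d(x_i, x_j) > 2r
  \;\wedge\; \bigwedge_{1 \le i \le n} \psi^{(r)}(x_i) \Big)
```

where $d$ denotes distance in the Gaifman graph and $\psi^{(r)}(x_i)$ is a formula relativised to the radius-$r$ neighbourhood of $x_i$. Gaifman's theorem states that every first-order sentence is equivalent to a Boolean combination of such sentences; the talk studies the fragment obtained by restricting to positive Boolean combinations.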

## Scalable Anytime Algorithms for Learning Formulas in Linear Temporal Logic

In this talk, I will introduce a novel algorithm for building (learning) an explainable model, in the form of a Linear Temporal Logic formula, from a set of positive and negative finite system behaviours, a.k.a. traces. Linear temporal logic (LTL) is a prominent logic for specifying temporal properties, widely used in artificial intelligence, program verification, motion planning in robotics, process mining, and many other areas. Despite growing interest from the research community, existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result. Our algorithm addresses both issues: it is able to construct formulas an order of magnitude larger than previous methods, and it is anytime, meaning that in most cases it successfully outputs a formula, albeit possibly not of minimal size. We have also implemented our algorithm in a prototype named SCARLET, and I will present an evaluation of its performance against publicly available benchmarks.

10h30

Break
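To make the learning setting concrete, the sketch below evaluates LTL formulas over *finite* traces and checks whether a candidate formula separates a sample. This is an illustrative toy, not SCARLET's algorithm; the formula encoding and helper names are mine.

```python
# Minimal evaluator for LTL over finite traces. A trace is a list of sets of
# atomic propositions; a formula is a nested tuple such as ("G", ("ap", "p")).
# This only *checks* candidate formulas -- it does not search for them.

def holds(f, trace, i=0):
    """Evaluate formula f on trace starting at position i (finite-trace semantics)."""
    op = f[0]
    if op == "ap":        # atomic proposition
        return i < len(trace) and f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "or":
        return holds(f[1], trace, i) or holds(f[2], trace, i)
    if op == "X":         # next
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "F":         # eventually
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":         # globally
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

def separates(f, positives, negatives):
    """A formula is consistent with the sample if it holds on every positive
    trace and on no negative trace."""
    return all(holds(f, t) for t in positives) and \
           not any(holds(f, t) for t in negatives)

# Example: G(p -> F q) separates this sample.
pos = [[{"p"}, {"q"}, {"p", "q"}]]
neg = [[{"p"}, set(), set()]]
formula = ("G", ("or", ("not", ("ap", "p")), ("F", ("ap", "q"))))
print(separates(formula, pos, neg))   # True
```

The learning problem is then: given the sample, find a (small) formula for which `separates` returns `True`; the scalability and anytime guarantees discussed in the talk concern that search.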

10h50

## Membership problems for Pebble Transducers

Deterministic pebble transducers are two-way transducers enhanced with the ability to drop a bounded number of pebbles on their input. They describe a robust class of string-to-string functions, named “polyregular functions”. Two variants of the model have recently been investigated: marble transducers (where the transducer drops “marbles” instead of “pebbles”, with a slightly different semantics) and blind transducers (also named “comparison-free pebble transducers” in the literature). This talk surveys some of the recent membership problems concerning these three models. Intuitively, such problems correspond to “program optimization” issues: given a “complex” program, they ask whether there exists a “simpler” one with the same behavior. We shall for instance ask whether, given a pebble transducer that uses k pebbles, there exists an equivalent transducer using only k-1 pebbles. Similar questions can be formulated for marble or blind transducers. We can also ask whether a pebble transducer can be transformed into a blind or a marble transducer. Some of these problems have recently been solved, while others are still open.

11h20

## Parameterized Analysis of Reconfigurable Broadcast Networks

Reconfigurable broadcast networks (RBN) are a model of distributed computation in which agents can broadcast messages to other agents using some underlying communication topology which can change arbitrarily over the course of executions. We conduct parameterized analysis of RBN. We consider cubes, (infinite) sets of configurations given by lower and upper bounds on the number of agents in each state, and we show that we can evaluate Boolean combinations over cubes and reachability sets of cubes in PSPACE. In particular, reachability from one cube to another is a PSPACE-complete problem. To prove the upper bound for this parameterized analysis, we establish some structural properties of the reachability sets and the symbolic graph abstraction of RBN, which might be of independent interest. We justify this claim by providing applications of these results. First, we show that the almost-sure coverability problem is PSPACE-complete for RBN, thereby closing a complexity gap from a previous paper. Second, we define a computation model using RBN, à la population protocols, called RBN protocols. We characterize precisely the set of predicates that can be computed by such protocols.

11h50
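As a point of reference, a cube in this sense is a set of configurations delimited by componentwise bounds. In notation of my own choosing (not the speaker's), with $Q$ the set of states and $C(q)$ the number of agents in state $q$:

```latex
\mathcal{C}(L, U) \;=\; \{\, C : Q \to \mathbb{N} \;\mid\; L(q) \le C(q) \le U(q)
  \text{ for all } q \in Q \,\}, \qquad U(q) \in \mathbb{N} \cup \{\infty\}
```

Allowing $U(q) = \infty$ is what makes cubes infinite sets of configurations, which is why reachability between cubes is a genuinely parameterized question.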

## The Regular Languages of First-Order Logic with One Alternation

The regular languages with a neutral letter expressible in first-order logic with one alternation are characterized. Specifically, it is shown that if an arbitrary Σ2 formula defines a regular language with a neutral letter, then there is an equivalent Σ2 formula that only uses the order predicate. This shows that the so-called Central Conjecture of Straubing holds for Σ2 over languages with a neutral letter, the first progress on the Conjecture in more than 20 years. To show the characterization, lower bounds against polynomial-size depth-3 Boolean circuits with constant top fan-in are developed. The heart of the combinatorial argument resides in studying how positions within a language are determined from one another, a technique of independent interest.

12h20

Lunch break @ “Le Labo Restaurant - Ascotel Lille Est Grand Stade”
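To fix ideas, a Σ2 formula is one of the shape ∃*∀* followed by a quantifier-free part. As an illustrative example of my own (not taken from the talk), the following Σ2 sentence over words, using only the order predicate and letter predicates, says that the factor $ab$ occurs:

```latex
\exists x \, \exists y \, \forall z \; \big( x < y \,\wedge\, a(x) \,\wedge\, b(y)
  \,\wedge\, \neg (x < z \wedge z < y) \big)
```

The universal quantifier forces $y$ to be the immediate successor of $x$, so the sentence defines the regular language of words containing $ab$ as a factor.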

14h00

## Regular expression search in a stream

Regular expression search is a key primitive in myriad applications, from web scraping to bioinformatics. A regular expression is a formalism for compactly describing a set of strings, built recursively from single characters using three operators: concatenation, union, and Kleene star. Two basic algorithmic problems concerning such expressions are membership and pattern matching. In the regular expression membership problem, we are given a regular expression R and a string T, and must decide whether T matches R. In the regular expression pattern matching problem, the task is to find the substrings of T that match R. In this talk, I will give a survey of recent algorithms for regular expression search in the practically relevant streaming setting.

15h00

Break
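The membership/pattern-matching distinction can be illustrated offline with Python's `re` module (the streaming algorithms surveyed in the talk solve the same problems without storing the text T). The expression R = (a|b)*abb below is my example, not one from the talk.

```python
import re

# R = (a|b)*abb: words over {a, b} ending in abb.
R = re.compile(r"(a|b)*abb")

# Membership: does the *whole* string match R?
print(bool(R.fullmatch("ababb")))    # True: ends with abb
print(bool(R.fullmatch("ababba")))   # False: trailing 'a'

# Pattern matching: which substrings of T match R?  Here we report the
# non-overlapping leftmost matches found by finditer.
T = "cababbacabb"
print([(m.start(), m.end(), m.group()) for m in R.finditer(T)])
```

In the streaming setting, T arrives character by character and the goal is to answer such queries using space far smaller than |T|.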

15h20

## Different strokes in randomised strategies: Revisiting Kuhn’s theorem under finite-memory assumptions

Two-player (antagonistic) games on (possibly stochastic) graphs are a prevalent model in theoretical computer science, notably as a framework for reactive synthesis. Optimal strategies may require randomisation when dealing with inherently probabilistic goals, balancing multiple objectives, or in contexts of partial information. There is no unique way to define randomised strategies. For instance, one can use so-called mixed strategies or behavioural ones. In the most general settings, these two classes do not share the same expressiveness. A seminal result in game theory - Kuhn’s theorem - asserts their equivalence in games of perfect recall. This result crucially relies on the possibility for strategies to use infinite memory, i.e., unlimited knowledge of all the past of a play. However, computer systems are finite in practice. Hence it is pertinent to restrict our attention to finite-memory strategies, defined as automata with outputs. Randomisation can be implemented in these in different ways: each of the initialisation, the outputs, and the transitions can be randomised or deterministic. Depending on which aspects are randomised, the expressiveness of the corresponding class of finite-memory strategies differs. In this work, we study two-player turn-based stochastic games and provide a complete taxonomy of the classes of finite-memory strategies obtained by varying which of the three aforementioned components are randomised. Our taxonomy holds both in settings of perfect and imperfect information. This talk is based on joint work with Mickael Randour.

15h50

## Characterizing Omega-Regularity through Finite-Memory Determinacy of Games on Infinite Graphs

We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class of objectives is that of omega-regular objectives, due to its relation to many natural problems in theoretical computer science. We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is omega-regular. We show a reciprocal of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., one whose updates can only observe colors) in all infinite game graphs, then the objective must be omega-regular. This provides a game-theoretic characterization of omega-regular objectives, and this characterization can help in obtaining memory bounds. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. This talk is based on joint work with Patricia Bouyer and Mickael Randour, which has been accepted to STACS 2022.