Invited speakers

Věra Kůrková
Institute of Computer Science of the Czech Academy of Sciences

Věra Kůrková is a senior scientist at the Department of Machine Learning, Institute of Computer Science of the Czech Academy of Sciences. She received her PhD in mathematics from Charles University, Prague, and her DrSc. (Prof.) in theoretical computer science from the Czech Academy of Sciences. She has been affiliated with the Institute of Computer Science since 1990, and in 2002-2008 she served as the Head of the Department of Theoretical Computer Science. In 2010, she received the Bolzano Medal for her contribution to mathematical sciences from the Czech Academy of Sciences.

Her main research interests are the mathematical theory of neurocomputing and machine learning. Her work includes the analysis of capabilities and limitations of shallow and deep networks, the dependence of network complexity on the increasing dimensionality of computational tasks, connections between the theory of inverse problems and generalization in machine learning, and nonlinear approximation theory.

Since 2008, she has been a member of the Board of the European Neural Network Society (ENNS), and in 2017-2019 she served as its president. She is a member of the editorial boards of the journals Neural Networks, Neural Processing Letters, and Applied and Computational Harmonic Analysis, and has been a guest editor of special issues of Neural Networks and Neurocomputing. She has been involved in the organization of the ICANN conferences in the roles of general chair, co-chair, and honorary chair.

Title of invited lecture:
Some implications of high-dimensional geometry for classification by neural networks

Computational difficulties of multidimensional tasks, called the “curse of dimensionality”, have long been known. On the other hand, the almost deterministic behaviour of some randomized models and algorithms depending on large numbers of variables can be attributed to the “blessing of dimensionality”. These phenomena can be explained by rather counter-intuitive properties of the geometry of high-dimensional spaces, which imply concentration of values of sufficiently smooth functions of many variables around their mean values and the possibility of reducing the dimensionality of data by random projections.
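As background for the random-projection phenomenon mentioned above (quoted here only for orientation, not as a result of the lecture), one standard formulation is the Johnson–Lindenstrauss lemma: for any $0<\varepsilon<1$ and any $n$ points in $\mathbb{R}^d$ there is a linear map $f:\mathbb{R}^d\to\mathbb{R}^k$ with $k=O(\varepsilon^{-2}\log n)$ such that

\[
(1-\varepsilon)\,\|x-y\|^2 \;\le\; \|f(x)-f(y)\|^2 \;\le\; (1+\varepsilon)\,\|x-y\|^2
\]

for all pairs $x,y$ of the given points; a suitably scaled random projection satisfies this with high probability.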

In the lecture, it will be shown how these properties of high-dimensional geometry can be employed to obtain some insights into the suitability of various types of neural networks for classification of large data sets. Probabilistic bounds on network complexity will be derived using concentration properties of approximation errors based on the Azuma and McDiarmid inequalities. Consequences for the choice of network architectures will be analyzed in terms of growth functions and VC dimensions of sets of network input-output functions. General results will be illustrated by examples of deep perceptron networks with various piecewise polynomial activation functions (ReLU, RePU). Connections with the central paradox of coding theory and pseudo-noise sequences will be discussed.
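For readers unfamiliar with the concentration inequalities named above, McDiarmid's bounded-differences inequality (quoted in its standard textbook form, not in the form used in the lecture) states that if $f(X_1,\dots,X_n)$ is a function of independent random variables that changes by at most $c_i$ when only its $i$-th argument is changed, then

\[
\Pr\big(\,|f(X_1,\dots,X_n)-\mathbb{E}\,f(X_1,\dots,X_n)|\ge t\,\big)\;\le\;2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}c_i^2}\right)
\]

for every $t>0$; the Azuma inequality is the analogous bound for martingales with bounded differences.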

 

Gerth Stølting Brodal
Aarhus University, Denmark

Gerth Stølting Brodal is a Professor at Aarhus University, Denmark. He received his PhD in computer science from Aarhus University in 1997 for a thesis on “Worst Case Efficient Data Structures”. From 1997 to 1998 he was a PostDoc in the group of Kurt Mehlhorn at the Max-Planck-Institute for Computer Science in Saarbrücken, Germany. Since 1998 he has been at Aarhus University, where he was affiliated with the Center for Basic Research in Computer Science (BRICS) in 1998–2005 and with the Center for Massive Data Algorithmics (MADALGO) in 2007–2017.

His main research interests are the design and analysis of algorithms and data structures. He has done work on fundamental data structures, including dictionaries and priority queues, persistent data structures, computational geometry, graph algorithms, string algorithms, I/O-efficient and cache-oblivious algorithms and data structures, algorithm engineering, and computational biology.

Title of invited lecture:
Data Structure Design – Theory and Practice

Efficient data structures are central to the design of efficient algorithms. Unfortunately, for a particular abstract data type there is most often not one data structure that is the best across different contexts. In this talk we will discuss different aspects influencing the design of data structures, including the computational model (e.g., the comparison model, pointer machine, RAM, external memory and cache-oblivious models), the choice of programming language (e.g., imperative, purely functional, and interpreted languages) and hardware characteristics (e.g., cache levels, branch prediction, and virtual memory translation).
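As a small, purely illustrative sketch of the hardware aspect mentioned above (an example added here, not code from the lecture), the two C++ functions below perform the same sorted-array search; the second narrows the search interval with a conditional add that compilers typically compile to a conditional move, so its behaviour under branch prediction can differ markedly from the classic version.

#include <cstddef>
#include <vector>

// Classic binary search returning the first position whose element is not
// less than key. The if-statement is a data-dependent branch that is hard
// to predict for random queries.
std::size_t lower_bound_branchy(const std::vector<int>& a, int key) {
    std::size_t lo = 0, hi = a.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1; else hi = mid;
    }
    return lo;
}

// Branch-free variant of the same search: the interval is narrowed by a
// conditional add that compilers usually emit as a conditional move.
std::size_t lower_bound_branchless(const std::vector<int>& a, int key) {
    if (a.empty()) return 0;
    const int* base = a.data();
    std::size_t n = a.size();
    while (n > 1) {
        std::size_t half = n / 2;
        base += (base[half - 1] < key) ? half : 0;  // no unpredictable jump
        n -= half;
    }
    return static_cast<std::size_t>(base - a.data()) + (*base < key ? 1 : 0);
}

Which of the two is faster depends on the compiler, the instruction set, the size of the array relative to the caches, and the query distribution, which is precisely the kind of context dependence the talk is about.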

 

Sławomir Lasota
University of Warsaw

Sławomir Lasota is a professor at the University of Warsaw. He received his PhD in computer science in 2000 from the University of Warsaw, with which he has been affiliated ever since: as an assistant professor from 2002, an associate professor from 2009, and a full professor since 2019. From 2001 to 2002 he was a postdoc in the Laboratoire Spécification et Vérification at ENS Cachan (currently ENS Paris-Saclay). In 2004 he was a postdoc at the University of Bordeaux and an invited professor at ENS Cachan.

His main research interests are automata theory, concurrency theory and formal verification. His recent research focuses on automatic analysis of infinite-state models of computation, such as Petri nets, or register and timed automata.

Further details can be found at https://mimuw.edu.pl/~sl.

Title of invited lecture:
Ackermannian lower bound for the reachability problem of Petri nets

Petri nets are an established model of concurrent systems with widespread applications. The reachability problem, which asks whether from a given initial configuration there exists a sequence of valid execution steps reaching a given final configuration, is the central algorithmic problem for this model. The complexity of the problem had remained, until recently, one of the hardest open questions in the verification of concurrent systems. In particular, the only known lower bound for the problem was the exponential-space one shown by Lipton in 1976.
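To make the problem statement precise (a standard formulation, not specific to the lecture): a Petri net has a finite set $P$ of places and a finite set $T$ of transitions, each $t\in T$ equipped with vectors $\mathrm{pre}(t),\mathrm{post}(t)\in\mathbb{N}^P$; a marking $m\in\mathbb{N}^P$ fires $t$ via

\[
m \xrightarrow{\,t\,} m' \quad\Longleftrightarrow\quad m \ge \mathrm{pre}(t) \ \text{ and }\ m' = m - \mathrm{pre}(t) + \mathrm{post}(t),
\]

and reachability asks, for given markings $m_{\mathrm{init}},m_{\mathrm{fin}}\in\mathbb{N}^P$, whether $m_{\mathrm{init}} \xrightarrow{\,t_1\,}\cdots\xrightarrow{\,t_k\,} m_{\mathrm{fin}}$ for some finite sequence of transitions.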

I will present a recently proposed breakthrough construction that led first to a non-elementary lower bound in 2019, and then to a non-primitive-recursive one in 2021. Paired with the recently obtained Ackermannian upper bound, these results close the complexity gap that remained open for over 40 years. I will also sketch further directions and related questions that are still open.