8:30 - 9:00 | Registration - Opening |
9:00 - 10:20 |
Tutorial: Perspectives on learning in games - Part I (slides)
Gabriele Farina, MIT
Abstract: The main focus of the tutorial will be learning in games.
The first part of the tutorial will cover no-regret learning algorithms such as regret matching,
FTRL, OMD, and their optimistic variants for normal-form games. The second part will
turn to sequential games, building up to imperfect-information games, regret decomposition,
and, if time permits, kernelization.
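As a taste of the tutorial's starting point, the following is a minimal self-play sketch of regret matching on rock-paper-scissors (an illustrative example, not code from the tutorial; the payoff matrix, horizon, and tie-breaking initialization are choices made for this sketch):

```python
import numpy as np

# Rock-paper-scissors payoffs for the row player (zero-sum, payoffs in [-1, 1]).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def rm_strategy(regrets):
    """Regret matching: play in proportion to the positive cumulative regrets,
    falling back to uniform when no action has positive regret."""
    pos = np.maximum(regrets, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(len(regrets), 1 / len(regrets))

def self_play(T=100_000):
    n = A.shape[0]
    reg_x = np.array([1.0, 0.0, 0.0])   # small asymmetry to leave the symmetric fixed point
    reg_y = np.zeros(n)
    avg_x, avg_y = np.zeros(n), np.zeros(n)
    for _ in range(T):
        x, y = rm_strategy(reg_x), rm_strategy(reg_y)
        u_x, u_y = A @ y, -(A.T @ x)     # expected payoff of each pure action
        reg_x += u_x - x @ u_x           # accumulate instantaneous regrets
        reg_y += u_y - y @ u_y
        avg_x += x
        avg_y += y
    return avg_x / T, avg_y / T

x_bar, y_bar = self_play()
# The *average* strategies approach the uniform Nash equilibrium; their
# exploitability decays at the usual O(1/sqrt(T)) no-regret rate.
exploitability = max((A @ y_bar).max(), (-(A.T @ x_bar)).max())
```

In self-play between two no-regret learners in a zero-sum game, the exploitability of the average strategies is bounded by the sum of the players' average regrets, which is the folk theorem the tutorial builds on.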
Short Bio: Gabriele Farina is an Assistant Professor at MIT in EECS and LIDS, additionally affiliated with the Operations Research Center (ORC). He holds the X-Window Consortium Career Development Chair. Before that, he spent a year as a Research Scientist at FAIR (Meta AI), where he worked on Cicero, a human-level AI agent combining strategic reasoning and natural language. Earlier, he was a Ph.D. student in the Computer Science Department at Carnegie Mellon University, where he worked with Tuomas Sandholm. He was supported by a 2019-2020 Facebook Fellowship in the area of Economics and Computation, and he is the recipient of the ACM SIGecom Doctoral Dissertation Award and a NeurIPS 2020 Best Paper Award. |
10:20 - 10:50 | Coffee break |
10:50 - 11:30 |
Non-clairvoyant scheduling with (untrusted) predictions
Evripidis Bampis, Sorbonne University
Abstract: We revisit classical non-clairvoyant scheduling problems where the processing time of each job is not known until the job finishes. We adopt the framework of learning-augmented algorithms and we study the question of whether (possibly erroneous) predictions may help design algorithms with a competitive ratio that is good when the prediction is accurate (consistency), degrades gradually with the prediction error (smoothness), and remains bounded even when the prediction is arbitrarily bad (robustness).
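For intuition, here is a toy sketch (not the algorithms from the talk; the job instance is made up) contrasting a schedule that fully trusts predicted processing times with round-robin, the classical non-clairvoyant baseline that is 2-competitive for total completion time:

```python
def total_completion_time(jobs, order):
    """Sum of completion times when jobs run non-preemptively in `order`."""
    t = total = 0
    for i in order:
        t += jobs[i]
        total += t
    return total

def spjf(jobs, preds):
    """Shortest-predicted-job-first: trust the predictions fully."""
    return total_completion_time(jobs, sorted(range(len(jobs)), key=lambda i: preds[i]))

def round_robin(jobs):
    """Idealized round-robin (processor sharing): job j finishes at
    sum_i min(p_i, p_j), so it needs no knowledge of processing times."""
    return sum(min(pi, pj) for pj in jobs for pi in jobs)

jobs = [3, 1, 4, 1, 5]
opt = spjf(jobs, jobs)                 # perfect predictions recover OPT (consistency)
bad = spjf(jobs, [-p for p in jobs])   # adversarial predictions reverse the order
rr = round_robin(jobs)                 # never worse than 2x OPT (robustness)
```

The learning-augmented algorithms in the talk interpolate between these two extremes, so that the guarantee degrades smoothly as the predictions get worse.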
Short Bio: Evripidis Bampis received his Diploma in Electrical Engineering from the National Technical University of Athens in 1989, and his MSc and PhD from the University of Paris-Sud (Orsay) in 1990 and 1993, respectively. He joined the Department of Computer Science of the University of Evry (France) in September 1993, where he served as an Assistant Professor from 1993 to 1998 and then as a Professor. He led the Optimization and Algorithms group at the IBISC laboratory from 1998 to 2010, and was also the director of the School of Doctoral Studies. Since September 2010, he has been a Professor at UPMC, now Sorbonne University, and he is a member of the Operations Research group of the LIP6 laboratory. He served as the Director of the Master in Computer Science at Sorbonne University from 2017 to 2020. His main research interests concern the design and analysis of algorithms for scheduling and graph problems. He is particularly interested in approximation and online algorithms, algorithmic game theory, and learning-augmented algorithms. He has been the coordinator of, or a participant in, various national, European, and international projects. |
11:30 - 11:45 | Short break |
11:45 - 12:10 |
A proof of the Nisan-Ronen conjecture
(slides)
Annamaria Kovacs, Goethe University Frankfurt
Abstract: We show that the best approximation ratio of deterministic truthful mechanisms for makespan-minimization for $n$ unrelated machines is $n$,
as conjectured by Noam Nisan and Amir Ronen.
Short Bio: Annamaria studied mathematics at Eötvös Loránd University, Budapest, Hungary. She received her PhD in Theoretical Computer Science from Saarland University / Max Planck Institute for Informatics in Saarbrücken, Germany in 2007. She has been a research associate at Goethe University Frankfurt since 2012. Annamaria's research focus is in the field of scheduling algorithms and algorithmic mechanism design. Her results include: the first deterministic truthful PTAS for related machine scheduling, a seminal paper on simultaneous item bidding auctions, and the proof of the Nisan-Ronen conjecture. |
12:10 - 12:35 |
The Complexity of Equilibrium Computation in First-Price Auctions
Aris Filos-Ratsikas, University of Edinburgh
Abstract: In a First-Price auction, there is one item for sale and a set of bidders who submit their bids for the item;
the winner is the bidder with the highest bid, and the payment is that bidder's bid. It is well-known that this auction format,
while being fundamental and widely used in practice, provides incentives to the bidders to underbid, aiming to win the item at a favourable price.
In this talk, I will present results on the computational complexity of computing Bayes-Nash equilibria of the First Price auction, when the prior beliefs about other bidders' values come from either continuous or discrete distributions. For the case of subjective priors, we show that it is PPAD-complete to compute either pure equilibria for continuous priors or mixed equilibria for discrete priors, and in fact that there is an inherent computational equivalence between those two settings. For pure equilibria and discrete subjective priors, we show that the problem of deciding equilibrium existence is NP-complete. I will also present some positive results for the case of symmetric equilibria and iid bidders. From joint works with Yiannis Giannakopoulos, Alexandros Hollender, Philip Lazos, and Diogo Pocas, and with Yiannis Giannakopoulos, Alexandros Hollender, and Charalampos Kokkalis.
Short Bio: Aris Filos-Ratsikas is a Lecturer (Assistant Professor) at the University of Edinburgh. His research lies in the intersection of theoretical computer science and artificial intelligence, with an emphasis on algorithms and computational complexity. In particular, he is interested in problems related to social choice theory, fair division, competitive markets, game theory and mechanism design. He obtained his PhD degree from the Computer Science Department of Aarhus University, Denmark in 2015, under the supervision of Peter Bro Miltersen. In the past, he was a Lecturer at the University of Liverpool, a postdoctoral researcher at École Polytechnique Fédérale de Lausanne, and a postdoctoral research assistant at the University of Oxford. |
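To make the underbidding incentive concrete, here is a textbook illustration (not a result from the talk): with two i.i.d. U[0,1] bidders, the symmetric Bayes-Nash equilibrium is to bid half one's value, and a grid search confirms that bidding v/2 is a best response when the opponent does the same.

```python
def expected_utility(v, b):
    """Expected utility of bidding b with value v in a first-price auction
    against one opponent whose value is U[0,1] and who bids half their value,
    so the opponent's bid is U[0, 1/2]."""
    win_prob = min(2 * b, 1.0) if b > 0 else 0.0
    return (v - b) * win_prob        # pay your own bid only when you win

v = 0.8
grid = [i / 1000 for i in range(1001)]
best_bid = max(grid, key=lambda b: expected_utility(v, b))
# The maximizer of (v - b) * 2b on [0, 1/2] is b = v/2: shading the bid
# trades a lower payment against a lower probability of winning.
```

Bidding truthfully (b = v) yields zero utility, which is exactly why the format pushes bidders to underbid; the talk studies the complexity of computing such equilibria in far more general settings.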
12:35 - 13:00 |
Constant Inapproximability for Fisher Markets
(slides)
Argyrios Deligkas, Royal Holloway University of London
Abstract: We study the problem of computing approximate market equilibria in Fisher markets with separable piecewise-linear
concave (SPLC) utility functions. In this setting, the problem was only known to be PPAD-complete for inverse-polynomial approximations.
We strengthen this result by showing PPAD-hardness for constant approximations. This means that the problem does not admit a polynomial
time approximation scheme (PTAS) unless PPAD=P. In fact, we prove that computing any approximation better than 1/11 is PPAD-complete.
As a direct byproduct of our main result, we get the same inapproximability bound for Arrow-Debreu exchange markets with SPLC utility functions.
Joint work with John Fearnley, Alexandros Hollender, and Themistoklis Melissourgos. Accepted for publication at EC 2024.
Short Bio: Argyrios Deligkas is a Senior Lecturer (Associate Professor) at Royal Holloway University of London. He received his PhD from the University of Liverpool. His research interests span Econ-CS, with a focus on equilibrium computation problems, Computational Social Choice, and Temporal Networks. |
13:00 - 14:30 | Lunch Break |
14:30 - 15:50 |
Tutorial: Wonders of high-dimensions: the Maths and Physics of Machine Learning - Part I
(slides)
Bruno Loureiro, ENS and CNRS
Abstract:
The past decade has witnessed a surge in the development and adoption of machine learning algorithms to solve day-to-day computational tasks. Yet, a solid theoretical understanding of even the most basic tools used in practice is still lacking, as traditional statistical learning methods are unfit for the modern regime in which the number of model parameters is of the same order as the quantity of data – a problem known as the curse of dimensionality. Curiously, this is precisely the regime studied by physicists since the mid-19th century in the context of interacting many-particle systems. This connection, first established in the seminal work of Elizabeth Gardner and Bernard Derrida in the 80s, is the basis of a long and fruitful marriage between these two fields.
The goal of this tutorial is to provide an in-depth overview of these connections, from the early days to more recent developments. After a brief historical summary, we will introduce some useful techniques from the statistical physics toolbox through the paradigmatic study of shallow neural networks, starting from their generalisation properties at initialisation (and their connection to kernel methods) to the feature learning regime after a few steps of training.
Short Bio: Bruno Loureiro is currently a CNRS researcher based at the Département d’Informatique of the Ecole Normale Supérieure in Paris, working at the crossroads between machine learning and statistical mechanics. After a PhD at the University of Cambridge, he held postdoctoral positions at IPhT in Paris and EPFL in Lausanne. He is interested in Bayesian inference, theoretical machine learning and high-dimensional statistics more broadly. His research aims at understanding how data structure, optimisation algorithms and architecture design come together in successful learning. |
15:50 - 16:20 | Coffee Break |
16:20 - 17:00 |
Learning from time series data using information-theoretic methods
(slides)
Ioannis Kontoyiannis, University of Cambridge
Abstract:
A hierarchical Bayesian framework is introduced for developing rich mixture
models for real-valued time series, partly motivated by important applications
in financial time series analysis. The model construction combines
information-theoretic ideas originally developed in the context of the context-tree weighting
algorithm, with standard statistical models for time series. We call the overall
construction the Bayesian Context Trees State Space Model (or BCT-X) framework.
Efficient algorithms are introduced that allow for effective, exact Bayesian
inference, and which can be updated sequentially, facilitating effective
forecasting. The utility of the general framework is illustrated on real-world
financial data, where the BCT-X framework revealed novel, natural structure
present, in the form of an enhanced leverage effect that had not been identified
before. This is joint work with Ioannis Papageorgiou.
Short Bio: Ioannis Kontoyiannis was born in Athens, Greece, in 1972. He studied mathematics at Imperial College and Cambridge, and he received the M.S. degree in statistics, and the Ph.D. degree in electrical engineering, both from Stanford University. In 1995 he worked at IBM Research, on a NASA-IBM project. From 1998 to 2001 he was with Purdue University. Between 2000 and 2005 he was with the Division of Applied Mathematics and the Department of Computer Science at Brown University. Between 2005 and 2021 he was with the Athens University of Economics and Business. In 2009 he was a visiting professor with the Department of Statistics at Columbia University. Between 2018 and 2020 he was with the Department of Engineering of the University of Cambridge, where he was Head of the Signal Processing and Communications Laboratory. In 2020 he joined the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, where he is the Churchill Professor of Mathematics of Information. He is a Fellow of Darwin College, Cambridge, and an Associate Member of the Signal Processing and Communications Laboratory, at the Information Engineering Division within the Department of Engineering. He has been awarded a Manning endowed assistant professorship (Brown University), a Sloan Foundation Research Fellowship, an honorary Master of Arts Degree Ad Eundem (Brown University) and a Marie Curie Fellowship. He is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers), of the AAIA (Asia-Pacific Artificial Intelligence Association), and of the IMS (Institute of Mathematical Statistics). |
17:00 - 17:10 | Short break |
17:10 - 18:30 |
Tutorial: Contextual Reinforcement Learning without the Contexts, a.k.a., Latent Markov Decision Processes - Part I
Constantine Caramanis, UT-Austin
Abstract:
In many interactive decision-making settings, there is latent and unobserved information that remains fixed. This is in fact the case in any
interactive system where the decision-maker interacts over a time horizon with another user (or system), but does not (or cannot) know everything about that
user - the unknown elements make this a partially observed problem, but a special one, since they remain fixed over each interaction horizon with the same user. Thus this is one of the fundamental unsolved settings of Reinforcement Learning.
Examples of this setting abound. Consider, for example, a dialogue system, where complete information about a user (e.g., the user's preferences) is not given. Similarly, we might consider the medical treatment of a patient over a period of time, where again not everything about the patient, the patient's preferences or response to a given course of medical treatment, is known. In such an environment, the latent information remains fixed throughout each episode, since the identity of the user does not change during an interaction. This type of environment can be modeled as a Latent Markov Decision Process (LMDP). This is a special instance of Partially Observed Markov Decision Processes (POMDPs). However, as we discuss in this tutorial, in many cases of interest, using results from POMDPs typically does not work. In this tutorial, we consider this important setting within the Reinforcement Learning paradigm. We are interested in understanding the modeling framework of LMDPs, the known information-theoretic lower bounds, and the established achievability results.
Short Bio: Constantine Caramanis is a Professor in Electrical and Computer Engineering at UT Austin. He received the Ph.D. degree in EECS from MIT. He is a recipient of a NSF CAREER award, and is an IEEE Fellow. His research interests focus on optimization, machine learning and statistics. |
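A minimal generative sketch of the LMDP model described above (illustrative only, with made-up transition and reward tables): a latent context is drawn once per episode and then held fixed, so within an episode the learner faces an ordinary MDP whose identity it never observes.

```python
import random

# Two latent MDPs over states {0, 1} and actions {0, 1}; which one is active
# is drawn once per episode and never revealed to the learner.
# TRANSITIONS[context][state][action] -> next state (deterministic for brevity)
TRANSITIONS = [
    {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}},          # context 0
    {0: {0: 1, 1: 0}, 1: {0: 1, 1: 0}},          # context 1
]
REWARDS = [
    {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 1.0}},  # context 0 rewards action 1
    {0: {0: 1.0, 1: 0.0}, 1: {0: 1.0, 1: 0.0}},  # context 1 rewards action 0
]

def run_episode(policy, horizon=5, rng=random):
    """Sample a latent context, then roll out an ordinary MDP episode in it."""
    ctx = rng.randrange(2)           # latent, fixed for the whole episode
    state, total = 0, 0.0
    for _ in range(horizon):
        action = policy(state)
        total += REWARDS[ctx][state][action]
        state = TRANSITIONS[ctx][state][action]
    return total

def always_1(state):
    """A context-oblivious policy: earns 5 in context 0 but 0 in context 1."""
    return 1
```

No single context-oblivious policy does well in both contexts here, which is the basic tension that distinguishes LMDPs from a single MDP and motivates the lower bounds discussed in the tutorial.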
9:00 - 10:20 |
Tutorial: Wonders of high-dimensions: the Maths and Physics of Machine Learning - Part II
(slides)
Bruno Loureiro, ENS and CNRS
Abstract:
The past decade has witnessed a surge in the development and adoption of machine learning algorithms to solve day-to-day computational tasks. Yet, a solid theoretical understanding of even the most basic tools used in practice is still lacking, as traditional statistical learning methods are unfit for the modern regime in which the number of model parameters is of the same order as the quantity of data – a problem known as the curse of dimensionality. Curiously, this is precisely the regime studied by physicists since the mid-19th century in the context of interacting many-particle systems. This connection, first established in the seminal work of Elizabeth Gardner and Bernard Derrida in the 80s, is the basis of a long and fruitful marriage between these two fields.
The goal of this tutorial is to provide an in-depth overview of these connections, from the early days to more recent developments. After a brief historical summary, we will introduce some useful techniques from the statistical physics toolbox through the paradigmatic study of shallow neural networks, starting from their generalisation properties at initialisation (and their connection to kernel methods) to the feature learning regime after a few steps of training.
Short Bio: Bruno Loureiro is currently a CNRS researcher based at the Département d’Informatique of the Ecole Normale Supérieure in Paris, working at the crossroads between machine learning and statistical mechanics. After a PhD at the University of Cambridge, he held postdoctoral positions at IPhT in Paris and EPFL in Lausanne. He is interested in Bayesian inference, theoretical machine learning and high-dimensional statistics more broadly. His research aims at understanding how data structure, optimisation algorithms and architecture design come together in successful learning. |
10:20 - 10:50 | Coffee break |
10:50 - 11:30 |
Two stories about distortion in social choice
(slides)
Ioannis Caragiannis, Aarhus University
Abstract: The notion of distortion has received much attention in recent years by the computational social choice community.
In general, distortion quantifies how the lack of complete information affects the quality of the social choice outcome.
Ideally, a distortion of 1 means that the social choice outcome is the most efficient one.
In the talk, we will consider two related scenarios. The first one is inspired by voting under the impartial culture assumption. We assume that agents have random values for the alternatives, drawn from a probability distribution independently for every agent-alternative pair. We explore voting rules that use a limited number of queries per agent in addition to the agent's ordinal information. For simple distributions, we present rules that always select an alternative of social welfare that is only a constant factor away from the optimal social welfare (i.e., rules of constant distortion). The second scenario is motivated by the practice of sortition. Here, we assume that agents correspond to points on a metric space. Our objective is to select, in a fair manner, a subset of the agents (corresponding to a citizens' assembly) so that for every election with alternatives from the same metric space, the most preferred alternative of the citizens' assembly has a social cost that is very close to that of the optimal alternative for the whole agent population. Our positive results indicate that assemblies of size logarithmic in the number of alternatives are sufficient to get constant distortion in this model. The talk is based on two papers that are joint works with Karl Fehrs, and with Evi Micha and Jannik Peters, respectively.
Short Bio: Ioannis Caragiannis (Computer Scientist and Engineer; Diploma, 1996; PhD, 2002, University of Patras, Greece) is a Professor at the Department of Computer Science of Aarhus University, Denmark, where he also serves as the Head of the research group on Computational Complexity and Game Theory.
His research interests include design and analysis of algorithms (including approximation and online algorithms), economics and computation (computational aspects of fair division, voting, matching problems, auctions, and strategic games), and foundations of machine learning and artificial intelligence (including strategic aspects of learning tasks and data processing). He has more than 190 publications in conference proceedings, journals, or as book chapters. For his research, he has received the 2024 Prize in Game Theory and Computer Science in honour of Ehud Kalai, the 2022 Artificial Intelligence Journal Prominent Paper Award, and a Distinguished Paper Honorable Mention at IJCAI 2019. |
11:30 - 11:45 | Short break |
11:45 - 12:10 |
Alternation makes the adversary weaker in two-player games
(slides)
Stratis Skoulakis, EPFL
Abstract: Motivated by alternating game-play in two-player games, we study an alternating variant of
Online Linear Optimization (OLO).
In alternating OLO, a learner at each round $t \in [T]$
selects a vector $x^t$ and then an adversary selects a cost-vector $c^t \in [-1,1]^n$.
The learner then experiences cost $(c^t + c^{t-1})^\top x^t$ instead of $(c^t)^\top x^t$ as in standard OLO. We establish that under this small twist,
the $\Omega(\sqrt{T})$ lower bound on the regret is no longer valid. More precisely, we present two online learning algorithms for alternating OLO
that respectively admit $\mathcal{O}((\log n)^{4/3} T^{1/3})$ regret for the $n$-dimensional simplex and $\mathcal{O}(\rho \log T)$ regret for the
ball of radius $\rho>0$. Our results imply that in alternating game-play, an agent can always guarantee $\tilde{\mathcal{O}}((\log n)^{4/3} T^{1/3})$ regret regardless
of the strategies of the other agent, while the regret bound improves to $\mathcal{O}(\log T)$ when the agent has only two actions.
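The twist in the cost model is easy to state in code. The sketch below (definitional only, not the talk's algorithms) computes the regret of a given play sequence over the simplex under the alternating cost model, with the convention $c^0 = 0$ for the first round:

```python
import numpy as np

def alternating_regret(X, C):
    """Regret in alternating OLO over the simplex: at round t the learner
    pays (c_t + c_{t-1})^T x_t (with c_0 = 0), and is compared against the
    best fixed simplex vertex under the same alternating costs."""
    prev = np.zeros_like(C[0])
    paid = 0.0
    cum = np.zeros_like(C[0])
    for x, c in zip(X, C):
        paid += (c + prev) @ x
        cum += c + prev          # cumulative alternating cost of each action
        prev = c
    return paid - cum.min()      # best fixed action = smallest cumulative cost

# Example: two actions; the learner always plays the second action, which
# here happens to be the best fixed action, so the regret is zero.
C = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
X = [np.array([0.0, 1.0]), np.array([0.0, 1.0])]
```

The talk's algorithms exploit the correlation between consecutive costs in this model to beat the $\Omega(\sqrt{T})$ barrier of standard OLO.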
Short Bio: Stratis is a postdoctoral research fellow at the Laboratory for Information and Inference Systems hosted at EPFL. His interests lie at the intersection of Game Theory, Optimization and Machine Learning. Stratis received his Ph.D. in Algorithmic Game Theory from the National Technical University of Athens under the supervision of Dimitris Fotakis. From 2019 to 2021 he was a postdoctoral research fellow at the Singapore University of Technology and Design.
12:10 - 12:35 |
Fairly Allocating Indivisible Goods to Strategic Agents
(slides)
Georgios Birmpas, University of Liverpool
Abstract: We consider the problem of fairly allocating a set of indivisible goods to a set of strategic agents with additive valuation
functions. We assume no monetary transfers and, therefore, a mechanism in our setting is an algorithm that takes as input the reported - rather than
the true - values of the agents. Our main goal is to explore whether there exist mechanisms that have pure Nash equilibria for every instance and,
at the same time, provide fairness guarantees for the allocations that correspond to these equilibria. We focus on a relaxation of envy-freeness,
namely envy-freeness up to one good (EF1), and we positively answer the above question. In particular, we study an algorithm that is known to
produce such allocations in the non-strategic setting: Round-Robin. We show that all of its pure Nash equilibria induce allocations that are
EF1 with respect to the underlying true values, while we also prove that, surprisingly, a version of this result holds even for agents with
cancelable or submodular valuation functions.
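The Round-Robin procedure and the EF1 condition from the abstract can be sketched directly from their textbook definitions (the instance below is made up for illustration):

```python
def round_robin(values, order):
    """Agents pick in `order`, cyclically, each taking their favourite
    remaining good according to their (reported) additive values."""
    n, m = len(values), len(values[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    for turn in range(m):
        i = order[turn % n]
        g = max(remaining, key=lambda g: values[i][g])
        bundles[i].append(g)
        remaining.remove(g)
    return bundles

def is_ef1(values, bundles):
    """EF1: every agent i values her own bundle at least as much as any other
    bundle after removing that bundle's single best good (in i's eyes)."""
    for i, vi in enumerate(values):
        own = sum(vi[g] for g in bundles[i])
        for j, bj in enumerate(bundles):
            if i == j or not bj:
                continue
            if own < sum(vi[g] for g in bj) - max(vi[g] for g in bj):
                return False
    return True

values = [[7, 1, 3, 2], [2, 8, 1, 6]]      # values[i][g]: agent i's value for good g
alloc = round_robin(values, order=[0, 1])
```

With truthful reports and additive values, Round-Robin always produces an EF1 allocation; the talk's result is the much stronger statement that its pure Nash equilibria remain EF1 with respect to the *true* values.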
Short Bio: Georgios Birmpas is currently a Lecturer (Assistant Professor) in the Department of Computer Science of the University of Liverpool. Before that, he worked for three years as a postdoctoral researcher at the Department of Computer, Control, and Management Engineering of Sapienza University of Rome, and for 1.5 years as a research associate at the Department of Computer Science of the University of Oxford. He received his PhD from the Department of Informatics of the Athens University of Economics and Business, an MSc in Logic and Theory of Algorithms (ΜΠΛΑ) from the Department of Mathematics of the University of Athens, and his Diploma from the School of Applied Mathematics and Physics of the National Technical University of Athens. His research interests include Algorithmic Game Theory, Approximation Algorithms, Fair Division, and Computational Social Choice. |
12:35 - 13:00 |
Transfer Learning Beyond Bounded Density Ratios
(slides)
Alkis Kalavasis, Yale
Abstract: We study the fundamental problem of transfer learning where a learning algorithm collects data from some source distribution $P$ but
needs to perform well with respect to a different target distribution $Q$. A standard change of measure argument implies that transfer learning happens when
the density ratio $dQ/dP$ is bounded. Yet, prior thought-provoking works by Kpotufe and Martinet (COLT, 2018) and Hanneke and Kpotufe (NeurIPS, 2019)
demonstrate cases where the ratio $dQ/dP$ is unbounded, but transfer learning is possible. In this talk, we will discuss a general transfer inequality over the
Euclidean domain, proving that non-trivial transfer learning for low-degree polynomials is possible under very mild assumptions, going well beyond the classical
assumption that $dQ/dP$ is bounded. For instance, it always applies if $Q$ is a log-concave measure and the inverse ratio $dP/dQ$ is bounded. This is based on joint
work with Ilias Zadik and Manolis Zampetakis.
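The "standard change of measure argument" mentioned in the abstract is importance reweighting: expectations under $Q$ can be estimated from $P$-samples weighted by $dQ/dP$. A tiny sketch with Gaussians (an illustrative choice of $P$ and $Q$, not from the talk):

```python
import math
import random

random.seed(0)

# Source P = N(0, 1), target Q = N(mu, 1).  The density ratio is
# dQ/dP(x) = exp(mu*x - mu^2/2), which is unbounded over the real line,
# so the bounded-ratio assumption fails; it still has finite moments
# under P, so plain importance weighting works in this particular case.
mu = 0.5
n = 200_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]
weights = [math.exp(mu * x - mu * mu / 2) for x in samples]

# Estimate E_Q[x] (which equals mu) using only samples from P.
estimate = sum(w * x for w, x in zip(weights, samples)) / n
```

The talk's transfer inequality goes beyond this regime, handling settings where even the moments of the ratio give no useful control.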
Short Bio: Alkis is an FDS Postdoctoral Fellow at Yale University working on learning theory and its connections with statistics, computational complexity and optimization. Before that, Alkis was a PhD student in the Computer Science Department of the National Technical University of Athens (NTUA) working with Dimitris Fotakis and Christos Tzamos. He completed his undergraduate studies in the School of Electrical and Computer Engineering of the NTUA, where he was advised by Dimitris Fotakis. |
13:00 - 14:30 | Lunch Break |
14:30 - 15:50 |
Tutorial: Contextual Reinforcement Learning without the Contexts, a.k.a., Latent Markov Decision Processes - Part II
Constantine Caramanis, UT-Austin
Abstract: In many interactive decision-making settings, there is latent and unobserved information that remains fixed.
This is in fact the case in any interactive system where the decision-maker interacts over a time horizon with another user (or system),
but does not (or cannot) know everything about that user - the unknown elements make this a partially observed problem, but a special one, since they remain fixed over each interaction horizon with the same user. Thus this is one of the fundamental unsolved settings of Reinforcement Learning.
Examples of this setting abound. Consider, for example, a dialogue system, where complete information about a user (e.g., the user's preferences) is not given. Similarly, we might consider the medical treatment of a patient over a period of time, where again not everything about the patient, the patient's preferences or response to a given course of medical treatment, is known. In such an environment, the latent information remains fixed throughout each episode, since the identity of the user does not change during an interaction. This type of environment can be modeled as a Latent Markov Decision Process (LMDP). This is a special instance of Partially Observed Markov Decision Processes (POMDPs). However, as we discuss in this tutorial, in many cases of interest, using results from POMDPs typically does not work. In this tutorial, we consider this important setting within the Reinforcement Learning paradigm. We are interested in understanding the modeling framework of LMDPs, the known information-theoretic lower bounds, and the established achievability results.
Short Bio: Constantine Caramanis is a Professor in Electrical and Computer Engineering at UT Austin. He received the Ph.D. degree in EECS from MIT. He is a recipient of a NSF CAREER award, and is an IEEE Fellow. His research interests focus on optimization, machine learning and statistics. |
15:50 - 16:20 | Coffee Break |
16:20 - 17:00 |
Learning-Augmented Mechanism Design
(slides)
Vasilis Gkatzelis, Drexel University
Abstract: This talk will introduce the model of "learning-augmented mechanism design" (or "mechanism design with predictions"),
which is an alternative model for the design and analysis of mechanisms in strategic settings. Aiming to complement the traditional approach in
computer science, which analyzes the performance of algorithms based on worst-case instances, recent work on "algorithms with predictions"
has developed algorithms that are enhanced with machine-learned predictions regarding the optimal solution.
The algorithms can use this information to guide their decisions and the goal is to achieve much stronger performance guarantees when
these predictions are accurate (consistency) while also maintaining good worst-case guarantees, even if these predictions are very inaccurate (robustness).
This talk will focus on the adaptation of this framework into mechanism design and provide an overview of some results along this line of work.
Short Bio: Vasilis Gkatzelis is an associate professor of computer science at the College of Computing & Informatics of Drexel University, and he is a recipient of the NSF CAREER award. Prior to joining Drexel, he held positions as a postdoctoral scholar at the computer science departments of UC Berkeley and Stanford University, and as a research fellow at the Simons Institute for the Theory of Computing. He received his PhD from the Courant Institute of New York University and his research focuses on problems in algorithmic game theory and approximation algorithms. |
17:00 - 17:10 | Short break |
17:10 - 18:30 |
Tutorial: Perspectives on learning in games - Part II
(slides)
Gabriele Farina, MIT
Abstract: The main focus of the tutorial will be learning in games.
The first part of the tutorial will cover no-regret learning algorithms such as regret matching,
FTRL, OMD, and their optimistic variants for normal-form games. The second part will
turn to sequential games, building up to imperfect-information games, regret decomposition,
and, if time permits, kernelization.
Short Bio: Gabriele Farina is an Assistant Professor at MIT in EECS and LIDS, additionally affiliated with the Operations Research Center (ORC). He holds the X-Window Consortium Career Development Chair. Before that, he spent a year as a Research Scientist at FAIR (Meta AI), where he worked on Cicero, a human-level AI agent combining strategic reasoning and natural language. Earlier, he was a Ph.D. student in the Computer Science Department at Carnegie Mellon University, where he worked with Tuomas Sandholm. He was supported by a 2019-2020 Facebook Fellowship in the area of Economics and Computation, and he is the recipient of the ACM SIGecom Doctoral Dissertation Award and a NeurIPS 2020 Best Paper Award. |