11:00 – 12:00
Plenary Session
Single-cell dissection of human disease circuitry
Manolis Kellis, MIT
Abstract:
Disease-associated nucleotides lie primarily in non-coding regions, increasing the urgency of understanding how gene-regulatory circuitry impacts human disease. To address this challenge, we generate transcriptional and epigenomic maps of 823 human tissues, 1500 individuals, and 7.5 million single cells. We link variants to target genes, upstream regulators, cell types of action, and perturbed pathways, and predict causal genes and regions to provide unbiased views of disease mechanisms, sometimes reshaping our understanding. We find that Alzheimer’s variants act primarily through immune processes, rather than neuronal processes, and the strongest genetic association with obesity acts via energy storage/dissipation rather than appetite/exercise decisions. We combine single-cell profiles, tissue-level variation, and genetic variation across healthy and diseased individuals to deconvolve bulk profiles into single-cell profiles, to recognize changes in cell-type proportion associated with disease and aging, to partition genetic effects into the individual cell types where they act, and to recognize cell-type-specific and disease-associated somatic mutations in exonic regions indicative of mosaicism. We expand these methods to electronic health records to recognize meta-phenotypes associated with combinations of clinical notes, prescriptions, lab tests, and billing codes, to impute missing phenotypes in sparse medical records, and to recognize the molecular pathways underlying complex meta-phenotypes in genotyped individuals by integration of molecular phenotypes imputed in disease-relevant cell types. Lastly, we develop programmable and modular technologies for manipulating these pathways by high-throughput reporter assays, genome editing, and gene targeting in human cells and mice, demonstrating tissue-autonomous therapeutic avenues in Alzheimer’s, obesity, and cancer.
These results provide a roadmap for translating genetic findings into mechanistic insights and ultimately new therapeutic avenues for complex disease and cancer.
Bio:
Manolis Kellis is a professor of computer science at MIT, a member of the Broad Institute of MIT and Harvard, a principal investigator in the Computer Science and Artificial Intelligence Lab at MIT, and head of the MIT Computational Biology Group (compbio.mit.edu). His research includes disease circuitry, genetics, genomics, epigenomics, coding genes, non-coding RNAs, regulatory genomics, and comparative genomics, applied to Alzheimer's disease, obesity, schizophrenia, cardiac disorders, cancer, immune disorders, and multiple other disorders. He has led several large-scale genomics projects, including the Roadmap Epigenomics project, the ENCODE project, the Genotype-Tissue Expression (GTEx) project, and comparative genomics projects in mammals, flies, and yeasts. He received the US Presidential Early Career Award in Science and Engineering (PECASE) from US President Barack Obama, the Mendel Medal for Outstanding Achievements in Science, the NSF CAREER award, the Alfred P. Sloan Fellowship, the Technology Review TR35 recognition, the AIT Niki Award, and the Sprowls award for the best Ph.D. thesis in computer science at MIT. He has authored over 240 journal publications cited more than 115,000 times. He has obtained more than 20 multi-year grants from the NIH, and his trainees hold faculty positions at Stanford, Harvard, CMU, McGill, Johns Hopkins, UCLA, and other top universities. He lived in Greece and France before moving to the US, and he studied and conducted research at MIT, the Xerox Palo Alto Research Center, and the Cold Spring Harbor Lab. For more info, see: compbio.mit.edu and kellislab.com.

12:00 – 12:15 Break
12:15 – 13:30
Session 4
Faster Approximate Pattern Matching: A Unified Approach
Panagiotis Charalampopoulos, The Interdisciplinary Center Herzliya
Abstract:
Approximate pattern matching is a natural and well-studied problem on strings: Given a text T, a pattern P, and a threshold k, find (the starting positions of) all substrings of T that are at distance at most k from P. We consider the two most fundamental string metrics: the Hamming distance and the edit distance. Under the Hamming distance, we search for substrings of T that have at most k mismatches with P, while under the edit distance, we search for substrings of T that can be transformed to P with at most k edits.
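As a concrete illustration of these two definitions (not of the paper's algorithms), here is a naive brute-force sketch in Python that reports the starting positions of k-mismatch and k-edit occurrences; the function names are mine:

```python
def hamming_occurrences(T, P, k):
    """Starting positions i where T[i:i+len(P)] has at most k mismatches with P."""
    m = len(P)
    return [i for i in range(len(T) - m + 1)
            if sum(a != b for a, b in zip(T[i:i + m], P)) <= k]

def edit_occurrences(T, P, k):
    """Starting positions i such that some substring T[i:x] can be
    transformed into P with at most k edits (insert/delete/substitute)."""
    n, m = len(T), len(P)
    starts = set()
    for i in range(n):
        # Edit-distance DP between prefixes of T[i:] and prefixes of P.
        prev = list(range(m + 1))          # distance from "" to P[:j]
        if prev[m] <= k:                   # the empty substring may suffice
            starts.add(i)
        # Any substring within k edits of P has length at most m + k.
        for x in range(i, min(n, i + m + k)):
            cur = [x - i + 1]
            for j in range(1, m + 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (T[x] != P[j - 1])))
            if cur[m] <= k:
                starts.add(i)
            prev = cur
    return sorted(starts)
```

For example, with T = "abcabd", P = "abc", k = 1, the Hamming variant reports positions 0 and 3, while the edit variant additionally reports position 1 (the substring "bc" is one insertion away from "abc"). The point of the paper is, of course, to beat this quadratic-or-worse brute force.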
Exact occurrences of P in T have a very simple structure: If we assume for simplicity that $|T| \le 3|P|/2$ and trim T so that P occurs both as a prefix and as a suffix of T, then both P and T are periodic with a common period. However, an analogous characterization for the structure of occurrences with up to k mismatches was proved only recently by Bringmann et al. [SODA'19]: Either there are $O(k^2)$ k-mismatch occurrences of P in T, or both P and T are at Hamming distance $O(k)$ from strings with a common period $O(m/k)$. We tighten this characterization by showing that there are $O(k)$ k-mismatch occurrences in the case when the pattern is not (approximately) periodic, and we lift it to the edit distance setting, where we tightly bound the number of k-edit occurrences by $O(k^2)$ in the non-periodic case. Our proofs are constructive and let us obtain a unified framework for approximate pattern matching for both considered distances. We showcase the generality of our framework with results for the fully compressed setting (where T and P are given as straight-line programs) and for the dynamic setting (where we extend a data structure of Gawrychowski et al. [SODA'18]).
A longer abstract can be found here: https://arxiv.org/pdf/2004.08350.pdf. This paper was presented at FOCS 2020.
On the Approximability of Multistage Min-Sum Set Cover
Stratis Skoulakis, Singapore University of Technology and Design
Abstract:
TBA
Faster Algorithms for k-Subset Sum and Variations
Manolis Vasilakis, National Technical University of Athens
Abstract:
We present new, faster pseudopolynomial time algorithms for the $k$-Subset Sum problem, defined as follows: given a set $Z$ of $n$ positive integers and $k$ targets $t_1, \ldots, t_k$, determine whether there exist $k$ disjoint subsets $Z_1, \dots, Z_k \subseteq Z$, such that $\Sigma(Z_i) = t_i$, for $i = 1, \ldots, k$.
Assuming $t = \max \{ t_1, \ldots, t_k \}$ is the maximum among the given targets, a standard dynamic programming approach based on Bellman's algorithm can solve the problem in $O(n t^k)$ time.
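To make the baseline concrete, here is a minimal sketch (mine, not the paper's code) of that Bellman-style dynamic program: it tracks every reachable tuple of partial subset sums, capped at the targets, and therefore uses at most $\prod_i (t_i + 1) \le (t+1)^k$ states per item:

```python
def k_subset_sum(Z, targets):
    """Decide k-Subset Sum by Bellman-style DP. `reach` holds every tuple
    (s_1, ..., s_k) of sums realisable by k disjoint subsets of the items
    processed so far, with coordinate i never exceeding targets[i]."""
    k = len(targets)
    reach = {(0,) * k}
    for z in Z:
        new = set(reach)                    # option: leave item z unused
        for state in reach:
            for i in range(k):              # option: put item z into subset i
                s = state[i] + z
                if s <= targets[i]:
                    new.add(state[:i] + (s,) + state[i + 1:])
        reach = new
    return tuple(targets) in reach
```

For instance, `k_subset_sum([1, 2, 3, 4], [3, 7])` returns `True` via the disjoint subsets {1, 2} and {3, 4}, while `k_subset_sum([1, 2], [2, 2])` returns `False` because the two targets cannot be met by disjoint subsets. The algorithms of the abstract improve on exactly this $O(n t^k)$-style bound.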
We build upon recent advances on \textsc{Subset Sum} due to Koiliaris and Xu [Koil19] and Bringmann [Brin17] in order to provide faster algorithms for $k$-Subset Sum.
We devise two algorithms: a deterministic one of time complexity $\tilde{O}(n^{k / (k+1)} t^k)$ and a randomised one of $\tilde{O}(n + t^k)$ complexity.
We further demonstrate how these algorithms can be used in order to cope with variations of $k$-Subset Sum, namely Subset Sum Ratio, $k$-Subset Sum Ratio and Multiple Subset Sum.

13:30 – 14:30 Break
14:30 – 15:30
Plenary Talk
Quantum Machine Learning: Prospects and Challenges
Iordanis Kerenidis, CNRS, University of Paris, and QC Ware
Abstract:
We will review recent work on Quantum Machine Learning and discuss the prospects and challenges of applying this new exciting computing paradigm to machine learning applications.
Bio:
Iordanis Kerenidis (CNRS and QC Ware) received his Ph.D. from the Computer Science Department at the University of California, Berkeley, in 2004. After a two-year postdoctoral position at the Massachusetts Institute of Technology, he joined the Centre National de Recherche Scientifique in Paris as a permanent researcher. He has been the coordinator of a number of EU-funded projects including an ERC Grant, and he is the founder and director of the Paris Centre for Quantum Computing. His research is focused on quantum algorithms for machine learning and optimization, including work on recommendation systems, classification and clustering. He is currently working as the Head of Quantum Algorithms Int. at QC Ware Corp.

15:30 – 15:45 Break
15:45 – 17:00
Session 5
The Largest Connected Subgraph Game
Foivos Fioravantes, Université Côte d’Azur, CNRS, Inria, I3S, France
Abstract:
In this talk, we introduce the largest connected subgraph game, played on a graph G. In each round, Alice first colours an uncoloured vertex of G red, and then Bob colours an uncoloured vertex of G blue, with all vertices initially uncoloured. Once all the vertices are coloured, Alice (resp. Bob) wins if there is a red (resp. blue) connected subgraph of G whose order is greater than the order of any blue (resp. red) connected subgraph of G. We first prove that Bob can never win this game, and define a large class of graphs (called reflection graphs) in which the game is a draw. We then show that determining the outcome of the game is PSPACE-complete, even in bipartite graphs of small diameter, and that the problem of recognising reflection graphs is GI-hard. We also prove that the game is a draw in paths if and only if the path is of even order or has at least 11 vertices, and that Alice wins in cycles if and only if the cycle is of odd length. Lastly, we give a linear-time algorithm that determines the outcome of the game in cographs.
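As a small illustration of the winning condition only (an evaluation helper, not the paper's strategies or algorithms), the following sketch scores a finished colouring by computing the order of the largest monochromatic connected subgraph via BFS; the graph encoding and names are mine:

```python
from collections import deque

def largest_connected(adj, coloured, colour):
    """Order of the largest connected subgraph induced by `colour` vertices.
    adj: dict vertex -> list of neighbours; coloured: dict vertex -> 'R'/'B'."""
    best, seen = 0, set()
    for v, c in coloured.items():
        if c != colour or v in seen:
            continue
        comp, q = 0, deque([v])     # BFS inside this colour class
        seen.add(v)
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if coloured.get(w) == colour and w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best

def outcome(adj, coloured):
    red = largest_connected(adj, coloured, 'R')
    blue = largest_connected(adj, coloured, 'B')
    return 'Alice' if red > blue else ('Bob' if blue > red else 'Draw')
```

On the path 0-1-2-3, the colouring R, B, R, B gives both players largest components of order 1, hence a draw; deciding who wins under optimal play is the PSPACE-complete question addressed in the talk.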
This is joint work with Julien Bensmail, Fionn Mc Inerney, and Nicolas Nisse.
Byzantine Fault Tolerant Symmetric-Persistent Circle Evacuation
Giannis Papaioannou, National Technical University of Athens
Abstract:
We consider $(n,f)$-evacuation on a circle, an evacuation problem of a hidden exit on the perimeter of a unit radius circle for $n>1$ robots, $f$ of which are faulty. All the robots start at the center of the circle and move with maximum speed 1. Robots must first find the exit and then move there to evacuate in minimum time. The problem is considered complete when all the honest robots know the correct position of the exit and the last honest robot has evacuated through the exit. During the search, robots can communicate wirelessly.
We focus on symmetric-persistent algorithms, that is, algorithms in which all robots move directly to the circumference, start searching the circle moving in the same direction (cw or ccw), and do not stop moving around the circle before receiving information about the exit. We study the cases of $(n,1)$- and $(n,2)$-evacuation. We first prove a lower bound of $1+\frac{4\pi}{n}+2\sin(\frac{\pi}{2}-\frac{\pi}{n})$ for one faulty robot, even a crash-faulty one. We also observe an almost matching upper bound obtained by means of an earlier search algorithm. We finally study the case with two Byzantine robots and we provide an algorithm that achieves evacuation in time at most $3+\frac{6\pi}{n}$, for $n \geq 9$, or at most $3+\frac{6\pi}{n}+\delta(n)$.
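A quick numerical check (mine) of how these bounds behave as the number of robots grows; the lower-bound expression is read as $1+4\pi/n+2\sin(\pi/2-\pi/n)$, which is an assumption about the intended formula:

```python
import math

def lower_bound_one_faulty(n):
    # Lower bound for evacuation with one faulty robot,
    # read as 1 + 4*pi/n + 2*sin(pi/2 - pi/n).
    return 1 + 4 * math.pi / n + 2 * math.sin(math.pi / 2 - math.pi / n)

def upper_bound_two_byzantine(n):
    # Upper bound 3 + 6*pi/n claimed for two Byzantine robots, n >= 9.
    return 3 + 6 * math.pi / n

for n in (9, 16, 100, 1000):
    print(n, round(lower_bound_one_faulty(n), 4),
          round(upper_bound_two_byzantine(n), 4))
```

Under this reading, both expressions tend to 3 as $n \to \infty$: with many robots, the dominant cost is crossing the circle (distance 1 to the perimeter plus a diameter-like trip of length 2 to the exit).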
Efficient Online Scheduling of Electric Vehicle Charging Using a Service-Price Menu
Angeliki Mathioudaki, National Technical University of Athens
Abstract:
Along with high penetration of Electric Vehicles (EVs), charging stations are required to service a large number of charging requests while accounting for constraints on the station's peak electricity consumption. To this end, a charging station needs to make online charging scheduling decisions, often under limited future information. An important challenge relates to the prioritization of EVs that have unknown valuations for different levels of charging service. In this paper, we take into consideration the inability of EV users to express these valuations explicitly. We consider a paradigm where a menu of possible charging schedules and corresponding prices is generated online. By letting the EV users pick their most preferable menu option, the proposed algorithm commits to each EV's charging completion time upon its arrival, achieves a near-optimal total weighted charging completion time, and prevents the users from strategically misreporting their preferences, while offering a practical and implementable solution to the problem of interaction between EVs and the charging station.

17:00 – 17:20 Break
17:20 – 19:00
Session 6
Adversarially Robust Principal Component Analysis
Vaggos Chatziafratis, Northwestern University and UC Santa Cruz
Abstract:
Many machine learning systems are vulnerable to small perturbations made to inputs either at test time or at training time. This has received much recent interest on the empirical front due to applications where reliability and security are critical. However, theoretical understanding of algorithms that are robust to adversarial perturbations is limited.
In this work we focus on Principal Component Analysis (PCA), a ubiquitous algorithmic primitive in machine learning. We formulate a natural robust variant of PCA where the goal is to find a low-dimensional subspace to represent the given data with minimum projection error that is, in addition, robust to small perturbations. Unlike PCA, which is solvable in polynomial time, our formulation is computationally intractable to optimize, as it captures a variant of the well-studied sparse PCA objective as a special case. In the talk, I will mainly focus on worst-case approximation algorithms that find constant-factor approximations with respect to the best subspace, in terms of the projection error and the robustness criterion. If time allows, we will also discuss several applications of our PCA formulation.
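For reference, the classical (non-robust) PCA baseline that this formulation builds on can be sketched via the SVD; this is the textbook computation, not the paper's robust algorithm:

```python
import numpy as np

def pca_subspace(X, d):
    """Top-d principal subspace (D x d matrix with orthonormal columns)
    of the data matrix X (n samples x D features)."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centred data are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

def projection_error(X, U):
    """Sum of squared distances from the centred rows of X to span(U)."""
    Xc = X - X.mean(axis=0)
    return float(np.sum((Xc - Xc @ U @ U.T) ** 2))
```

For data lying exactly on a line, the top-1 subspace achieves projection error (essentially) zero. The robust variant of the abstract additionally requires the error to remain small under small perturbations of the input, which is what makes the optimization intractable.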
Efficient Algorithms for Learning from Coarse Labels
Alkis Kalavasis, National Technical University of Athens
Abstract:
For many learning problems one may not have access to fine-grained label information; e.g., an image can be labeled as husky, dog, or even animal depending on the expertise of the annotator. In this work, we formalize these settings and study the problem of learning from such coarse data. Instead of observing the actual labels from a set $\mathcal{Z}$, we observe coarse labels corresponding to a partition of $\mathcal{Z}$ (or a mixture of partitions). Our main algorithmic result is that essentially any problem learnable from fine-grained labels can also be learned efficiently when the coarse data are sufficiently informative. We obtain our result through a generic reduction for answering Statistical Queries (SQ) over fine-grained labels given only coarse labels. The number of coarse labels required depends polynomially on the information distortion due to coarsening and the number of fine labels $|\mathcal{Z}|$.
We also investigate the case of (infinitely many) real-valued labels, focusing on a central problem in censored and truncated statistics: Gaussian mean estimation from coarse data. We provide an efficient algorithm when the sets in the partition are convex and establish that the problem is NP-hard even for very simple non-convex sets.
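A tiny sketch (with hypothetical label names, mine) of what "observing coarse labels corresponding to a partition" means: instead of the fine label, the learner only sees the partition block containing it.

```python
def coarsen(y, partition):
    """Return the block of `partition` (a list of disjoint sets covering
    the fine label set Z) that contains the fine label y; this block is
    the coarse label the learner observes instead of y itself."""
    for block in partition:
        if y in block:
            return frozenset(block)
    raise ValueError(f"{y!r} is not covered by the partition")

# A coarsening of Z = {husky, poodle, tabby, siamese} into dog/cat blocks.
partition = [{"husky", "poodle"}, {"tabby", "siamese"}]
```

Here `coarsen("husky", partition)` returns the block {"husky", "poodle"}: the annotator effectively reported "dog". The abstract's reduction shows how to answer statistical queries about the fine labels from such coarse observations.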
Learning Ising models from one or multiple samples
Anthimos Vardis Kandiros, MIT
Abstract:
There have been two separate lines of work on estimating Ising models: (1) estimating them from multiple independent samples under minimal assumptions about the model's interaction matrix; and (2) estimating them from one sample in restrictive settings. We propose a unified framework that smoothly interpolates between these two settings, enabling significantly richer estimation guarantees from one, a few, or many samples.
Our main theorem provides guarantees for one-sample estimation, quantifying the estimation error in terms of the metric entropy of a family of interaction matrices. As corollaries of our main theorem, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold. In fact, our main result handles multiple independent samples by viewing them as one sample from a larger model, and can be used to derive estimation bounds that are qualitatively similar to those obtained in the aforementioned multiple-sample literature. Our technical approach benefits from sparsifying a model's interaction network, conditioning on subsets of variables that make the dependencies in the resulting conditional distribution sufficiently weak. We use this sparsification technique to prove strong concentration and anti-concentration results for the Ising model, which we believe have applications beyond the scope of this paper.
Learning and Covering Sums of Independent Random Variables with Unbounded Support
Kostas Stavropoulos, National Technical University of Athens
Abstract:
We study the problem of covering and learning sums $S = X_1 + \cdots + X_n$ of independent random variables $X_i$ with unbounded, or even infinite, support. De et al. [FOCS 2018] showed that the maximum value of the collective support of the $X_i$'s necessarily appears in the sample complexity of learning $S$. We bypass this lower bound in the general setting where each variable $X_i$ has a decreasing probability mass function and is a different member of some, possibly multi-parameter, exponential family E that satisfies some structural properties. These properties allow E to contain heavy-tailed distributions and distributions that are not log-concave.
Our main results in this setting are the following.
(Covering). For every ε > 0, and every E that satisfies some structural assumptions, there exists a set L(E, ε) that covers S within ε in total variation distance, such that the size of L(E, ε) depends on some parameters of E but is independent of the size of the support of the $X_i$'s. In particular, the support can be infinite.
(Learning). For every ε > 0, and every k-parameter family E that satisfies some structural assumptions, there exists an algorithm with $\tilde{O}(k) \cdot poly(1/ε)$ samples that learns a sum of n arbitrary members of E within ε in TV distance.
Our results are the first to address the problem of learning sums of independent random variables with unbounded support in a very general setting.
Giuseppe Italiano, LUISS Guido Carli University
Manolis Kellis, MIT
Iordanis Kerenidis, University Paris Diderot
Mihalis Yannakakis, Columbia University
Panagiotis Charalampopoulos. Faster Approximate Pattern Matching: A Unified Approach
Vaggos Chatziafratis. Adversarially Robust Principal Component Analysis
Aris Filos-Ratsikas. On the Complexity of Equilibrium Computation in First-Price Auctions
Foivos Fioravantes. The Largest Connected Subgraph Game
Vasilis Gkatzelis. Resource-Aware Cost-Sharing Mechanisms with Priors
Andreas Göbel. A spectral independence view on hard spheres via block dynamics
Alkis Kalavasis. Efficient Algorithms for Learning from Coarse Labels
Fivos Kalogiannis. Min-max optimization in team zero-sum games
Vardis Kandiros. Learning Ising models from one or multiple samples
Philip Lazos. Allocating Indivisible Goods to Strategic Agents: Pure Nash Equilibria and Fairness
Angeliki Mathioudaki. Efficient Online Scheduling of Electric Vehicle Charging Using a Service-Price Menu
Themistoklis Melissourgos. Pizza Sharing is PPA-hard
Ioannis Panageas. Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games
Orestis Papadigenopoulos. Recurrent Submodular Welfare and Matroid Blocking Semi-Bandits
Giannis Papaioannou. Byzantine Fault Tolerant Symmetric-Persistent Circle Evacuation
Panagiotis Patsilinakos. Strategyproof Facility Location in Perturbation Stable Instances
Stratis Skoulakis. On the Approximability of Multistage Min-Sum Set Cover
Kostas Stavropoulos. Learning and Covering Sums of Independent Random Variables with Unbounded Support
Stelios Triantafyllou. On Blame Attribution for Accountable Multi-Agent Sequential Decision Making
Manolis Vasilakis. Faster Algorithms for k-Subset Sum and Variations