Abstract
Discrepancy theory tells us that it is possible to divide a set of vectors into two sets that look surprisingly similar to each other. In particular, these sets can be much more similar to each other than those produced by a random division. The development of discrepancy theory has been motivated by applications in fields including combinatorics, geometry, optimization, and functional analysis. But, we expect its greatest impact will be in the design of randomized controlled trials.
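As a toy illustration of this claim (not taken from the talks), the following numpy sketch compares a uniformly random division of unit vectors into two groups, encoded as ±1 signs, with a simple greedy signing that keeps the running signed sum small; the variable names and the greedy rule are ours, chosen for simplicity rather than taken from any particular result:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # rows are unit vectors

# Random division: each vector tossed into a group by a fair coin.
random_signs = rng.choice([-1.0, 1.0], size=n)
random_disc = np.linalg.norm(random_signs @ V, np.inf)

# Greedy division: assign each vector the sign that shrinks the
# running signed sum, a crude stand-in for low-discrepancy methods.
total = np.zeros(d)
greedy_signs = np.empty(n)
for i, v in enumerate(V):
    s = -1.0 if total @ v > 0 else 1.0
    greedy_signs[i] = s
    total += s * v
greedy_disc = np.linalg.norm(total, np.inf)

print(f"random: {random_disc:.2f}   greedy: {greedy_disc:.2f}")
```

The discrepancy of the greedy division is typically far below that of the random one, which is the phenomenon the abstract alludes to: the two greedy groups "look surprisingly similar" in every coordinate.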
Randomized controlled trials are used to test the effectiveness of interventions, like medical treatments and educational innovations. Randomization is used to ensure that the test and control groups are likely to be similar. If we know nothing about the experimental subjects, a random division of subjects into test and control groups is the best choice. But, when we do have prior information about the experimental subjects, we can produce random divisions of low discrepancy that lead to more accurate estimates of the effectiveness of treatments.
Until a breakthrough by Bansal in 2010, the major algorithmic problems of discrepancy theory were thought to be computationally intractable. We now have efficient algorithms that solve many discrepancy problems, and the development of these algorithms has led to many new discoveries in discrepancy theory.
In these talks, we will survey the major mathematical and algorithmic results of discrepancy theory, and explain how discrepancy theory can be used to improve randomized controlled trials.
Talk 1: Introduction. This talk will introduce Discrepancy Theory, present a framework for the analysis of Randomized Controlled Trials, and summarize results from my joint work with Chris Harshaw, Fredrik Sävje, and Peng Zhang that uses algorithmic discrepancy theory to improve the design of randomized controlled trials.
Talk 2: Combinatorial and Geometric Discrepancy Theory. We will explain some classical results of Discrepancy Theory due to Beck and Fiala, Spencer, and Banaszczyk, along with more recent work on sparsification, Weaver's Problem, and their connections to the Kadison-Singer Problem. We will mention a few compelling open problems.
Talk 3: Algorithms, Probability, and Statistics. We finish by explaining algorithms that make the guarantees of Discrepancy Theory practical, emphasizing those based on random walks and Brownian motion. We will explain the Gram-Schmidt Walk, which was introduced by Bansal, Dadush, Garg, and Lovett to algorithmically realize the guarantees of Banaszczyk's Theorem, and which we use to design Randomized Controlled Trials.
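For readers who want a concrete picture before Talk 3, here is a minimal numpy sketch of the Gram-Schmidt Walk of Bansal, Dadush, Garg, and Lovett. The function name, interface, and numerical tolerances are our own choices, and the sketch omits the refinements used in the experimental-design work; it only aims to show the shape of the random walk: a fractional sign vector drifts inside the cube [-1, 1]^n along directions nearly orthogonal to the not-yet-frozen vectors, until every coordinate freezes at ±1.

```python
import numpy as np

def gram_schmidt_walk(V, seed=None):
    """Sketch of the Gram-Schmidt Walk (Bansal-Dadush-Garg-Lovett).

    V has one column per input vector (columns assumed norm <= 1).
    Returns signs in {-1, +1}^n making the signed column sum small.
    """
    rng = np.random.default_rng(seed)
    d, n = V.shape
    x = np.zeros(n)                     # fractional signs in [-1, 1]^n
    alive = np.ones(n, dtype=bool)      # coordinates not yet frozen
    while alive.any():
        idx = np.flatnonzero(alive)
        p = idx[-1]                     # pivot: largest alive index
        others = idx[:-1]
        u = np.zeros(n)
        u[p] = 1.0
        if others.size:
            # Step direction: least-squares coefficients so that the
            # step's effect on the sum is v_p minus its projection
            # onto the span of the other alive vectors.
            c, *_ = np.linalg.lstsq(V[:, others], -V[:, p], rcond=None)
            u[others] = c
        # Largest steps +delta and -delta keeping x inside [-1, 1]^n.
        up, um = [], []
        for i in idx:
            if u[i] > 1e-12:
                up.append((1 - x[i]) / u[i]); um.append((1 + x[i]) / u[i])
            elif u[i] < -1e-12:
                up.append((-1 - x[i]) / u[i]); um.append((x[i] - 1) / u[i])
        dp, dm = min(up), min(um)
        # Randomize the step so each coordinate of x is a martingale.
        delta = dp if rng.random() < dm / (dp + dm) else -dm
        x[idx] += delta * u[idx]
        x = np.clip(x, -1.0, 1.0)
        alive &= np.abs(x) < 1 - 1e-7   # freeze coordinates at +-1
    return np.where(x >= 0, 1.0, -1.0)

# Example: 40 unit vectors in 8 dimensions.
rng = np.random.default_rng(1)
V = rng.standard_normal((8, 40))
V /= np.linalg.norm(V, axis=0, keepdims=True)
signs = gram_schmidt_walk(V, seed=1)
disc = np.linalg.norm(V @ signs, np.inf)
```

Each iteration drives at least one coordinate of x to ±1, so the walk terminates after at most n steps; the resulting signed sum enjoys the sub-Gaussian concentration guaranteed by Banaszczyk-type bounds.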
Lectures
Lecture 1:
Monday, September 16, 3.15–4.15 pm
Room K1
Lecture 2:
Tuesday, September 17, 1.15–2.15 pm
Room D2
Lecture 3:
Wednesday, September 18, 1.15–2.15 pm
Room D2
Sponsored by the Göran Gustafsson Foundation