We propose and analyze iterative algorithms that are computationally efficient, statistically sound, and, in some settings, adaptive. We consider three frameworks in which data is presented to the learner. First, we consider the Federated (Distributed) Learning (FL) setup, where data is available only at the edge, and a center machine learns models by iteratively interacting with the edge nodes. Second, we study the canonical setting of supervised batch learning, where all data-label pairs are available to the learner from the start. Third, we examine the framework of online learning, where data arrives in a streaming fashion; here we focus on specific settings such as Bandit and Reinforcement Learning (RL).

In the Federated Learning (FL) framework, we address the canonical problems of device heterogeneity, communication bottlenecks, and adversarial robustness for large-scale, high-dimensional problems. We propose efficient and provable first- and second-order algorithms that employ quantization of the communicated information and several robust aggregation schemes to address these problems, while simultaneously retaining the optimal statistical rates. For the (supervised) batch learning framework, we use an efficient and statistically sound algorithm, Alternating Minimization (AM), to address the problem of max-affine regression: a non-convex problem that generalizes classical phase retrieval and closely resembles convex regression. We give convergence guarantees for AM with a near-optimal statistical rate. Finally, in the online learning setup, we address the problem of adaptation (model selection) for contextual bandits (linear and beyond), and later extend these techniques to Reinforcement Learning (RL). Our algorithms here are efficient, provable, and, most importantly, adaptive to the problem complexity.
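To make the AM approach concrete, the following is a minimal illustrative sketch, not the thesis's exact algorithm: it assumes the standard max-affine model y ≈ max_j(⟨θ_j, x⟩ + b_j), uses a random initialization (the theoretical guarantees typically require a suitable initialization), and the function name and parameters are hypothetical. AM alternates between assigning each sample to the affine piece currently attaining the max and refitting each piece by least squares.

```python
import numpy as np

def am_max_affine(X, y, k, n_iters=50, seed=0):
    """Illustrative Alternating Minimization for max-affine regression.

    Model (assumed): y ~ max_j (X @ theta_j + b_j).
    Alternates (1) assigning each sample to its current argmax affine
    piece and (2) refitting each piece via least squares on its samples.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random init; provable variants of AM assume a good enough start.
    Theta = rng.normal(size=(k, d))
    b = rng.normal(size=k)
    Xa = np.hstack([X, np.ones((n, 1))])  # augment with intercept column
    for _ in range(n_iters):
        # Step 1: assignment -- which affine piece attains the max per sample.
        preds = X @ Theta.T + b           # shape (n, k)
        assign = preds.argmax(axis=1)
        # Step 2: least-squares refit of each piece on its assigned samples.
        for j in range(k):
            idx = assign == j
            if idx.sum() >= d + 1:        # skip under-determined pieces
                coef, *_ = np.linalg.lstsq(Xa[idx], y[idx], rcond=None)
                Theta[j], b[j] = coef[:d], coef[d]
    return Theta, b
```

Each iteration is cheap (one argmax pass plus k small least-squares solves), which is what makes AM attractive for this non-convex problem compared to global optimization.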
