We begin by studying the class of accelerated methods in optimization from a continuous-time perspective. We show that there is a Lagrangian functional, which we call the Bregman Lagrangian, that generates a family of dynamics via the variational principle of least action; these dynamics are related to one another by reparametrizing (speeding up) time. Furthermore, we provide a systematic methodology for discretizing these dynamics into a family of accelerated higher-order algorithms with matching convergence rates in discrete time. Our work illuminates two classes of natural dynamics for optimization: the gradient dynamics and the Lagrangian dynamics.
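For orientation, a Lagrangian of this kind can be sketched as follows. This is an illustrative form only; the distance-generating convex function h, its Bregman divergence D_h, and the time-dependent scaling functions α_t, β_t, γ_t are notation introduced here for the sketch:

```latex
\mathcal{L}(X, V, t) \;=\; e^{\alpha_t + \gamma_t}
\Bigl( D_h\bigl(X + e^{-\alpha_t} V,\; X\bigr) \;-\; e^{\beta_t} f(X) \Bigr),
\qquad
D_h(y, x) \;=\; h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle .
```

Applying the Euler–Lagrange equation to such a functional yields a second-order ODE for the trajectory X(t), and rescaling the functions α_t, β_t, γ_t corresponds to the time speed-up relating the members of the family.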
Next, we study the problem of approximate inference in graphical models. We analyze the reweighted Kikuchi approximation for estimating the log partition function, which approximates the entropy term in the variational representation via a region graph decomposition. We establish sufficient conditions for the concavity of the objective function in terms of the weight assignments in the Kikuchi expansion, and characterize the polytope of concavity in terms of the cycle structure of the region graph. We also provide an algorithm for finding the global optimum, together with simulations demonstrating the advantages of the reweighted Kikuchi approach.
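Schematically, the objects involved can be written as follows. The notation here (the marginal polytope M, region set R, weights ρ_r, and local entropies H_r of the pseudo-marginals τ_r) follows standard region-graph conventions and is used purely for illustration:

```latex
\log Z(\theta) \;=\; \sup_{\tau \in \mathbb{M}}
\bigl\{ \langle \theta, \tau \rangle + H(\tau) \bigr\},
\qquad
H(\tau) \;\approx\; \sum_{r \in \mathcal{R}} \rho_r \, H_r(\tau_r),
```

where the exact entropy H(τ) is intractable in general and the reweighted Kikuchi approximation replaces it by the weighted sum of region entropies on the right; the concavity question concerns which weight vectors ρ make this surrogate objective concave in τ.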
Finally, we study the problem of minimax option pricing as an online learning game between Nature and an Investor. Whereas the classical Black-Scholes model assumes the price fluctuates continuously following a geometric Brownian motion, we consider a worst-case model in which Nature chooses a sequence of price fluctuations under a cumulative quadratic volatility constraint, possibly with jumps, while the Investor makes a sequence of hedging decisions. We show that even in this adversarial, non-stochastic framework, the value of the game converges to the Black-Scholes option price, and the Black-Scholes hedging strategy is near-optimal for the Investor.
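The classical benchmark in this comparison can be sketched in code: a minimal Black-Scholes delta-hedging simulation for a European call with zero interest rate. This is an illustrative sketch only; the parameters below and the geometric-Brownian price path are assumptions of the example, not the adversarial price sequences studied in the text:

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price(S, K, total_var):
    # Black-Scholes call price at zero interest rate, parametrized by
    # the remaining total variance sigma^2 * (T - t).
    if total_var <= 0.0:
        return max(S - K, 0.0)
    sd = math.sqrt(total_var)
    d1 = (math.log(S / K) + 0.5 * total_var) / sd
    return S * norm_cdf(d1) - K * norm_cdf(d1 - sd)

def bs_delta(S, K, total_var):
    # Hedge ratio: shares held by the Black-Scholes hedging strategy.
    if total_var <= 0.0:
        return 1.0 if S > K else 0.0
    sd = math.sqrt(total_var)
    return norm_cdf((math.log(S / K) + 0.5 * total_var) / sd)

def replication_error(S0=100.0, K=100.0, sigma=0.2, T=1.0, n=2000, seed=0):
    # Simulate one geometric Brownian price path, delta-hedge along it,
    # and return (terminal portfolio value) - (call payoff).
    rng = random.Random(seed)
    dt = T / n
    S = S0
    cash = bs_call_price(S0, K, sigma**2 * T)  # start with the BS premium
    shares = 0.0
    for i in range(n):
        target = bs_delta(S, K, sigma**2 * (T - i * dt))
        cash -= (target - shares) * S          # rebalance at current price
        shares = target
        S *= math.exp(sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
                      - 0.5 * sigma**2 * dt)
    return cash + shares * S - max(S - K, 0.0)
```

With a fine discretization, the replication error is small relative to the option premium (here roughly 8 for an at-the-money call), consistent with the hedging strategy approximately replicating the payoff; the adversarial analysis in the text shows this strategy remains near-optimal even without the stochastic model.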