Description

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
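The interpolation described above comes from adapting the gradient-descent step size to the curvature observed so far. The Python snippet below is a minimal illustrative sketch of that idea under simplifying assumptions, not the paper's exact algorithm: the function names, the Euclidean-ball feasible set, and the simplified rule for the added regularization lam_t are all illustrative choices. When the losses carry no curvature the effective step size decays like 1/sqrt(t) (the √T regime of Zinkevich); when every loss is strongly convex it decays like 1/t (the log T regime of Hazan et al.).

import numpy as np

def project_l2_ball(x, radius=1.0):
    # Project onto a Euclidean ball standing in for the feasible set.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_ogd(losses, dim, radius=1.0):
    # losses: iterable of (grad_fn, H_t) pairs, where grad_fn(x) returns the
    # gradient of the t-th loss at x and H_t >= 0 is its strong-convexity
    # parameter (0 for linear or merely convex losses).
    x = np.zeros(dim)
    cumulative_curvature = 0.0
    iterates = []
    for t, (grad_fn, H_t) in enumerate(losses, start=1):
        iterates.append(x.copy())
        # Add just enough synthetic curvature lam_t to keep the accumulated
        # curvature at least sqrt(t); this caps the step size at O(1/sqrt(t))
        # when the observed losses carry no curvature of their own, and adds
        # nothing once the losses are sufficiently strongly convex.
        lam_t = max(0.0, np.sqrt(t) - np.sqrt(t - 1) - H_t)
        cumulative_curvature += H_t + lam_t
        eta_t = 1.0 / cumulative_curvature
        x = project_l2_ball(x - eta_t * grad_fn(x), radius)
    return iterates

The one design choice that drives the interpolation is that the step size is the inverse of the accumulated curvature, with synthetic curvature injected only when the observed functions do not supply enough on their own.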
