Description
We provide a principled way of proving ~O(sqrt(T)) high-probability guarantees for partial-information (bandit) problems over arbitrary convex decision sets. First, we prove a regret guarantee for the full-information problem in terms of "local" norms, both for entropy and for self-concordant barrier regularization, thus unifying these methods. Given such an algorithm as a black box, we can convert a bandit problem into a full-information problem using a sampling scheme. The main result states that a high-probability ~O(sqrt(T)) bound holds whenever the black box, the sampling scheme, and the estimates of the missing information satisfy a number of conditions, which are relatively easy to check. At the heart of the method is a construction of linear upper bounds on confidence intervals. As applications of the main result, we provide the first known efficient algorithm for the sphere with an ~O(sqrt(T)) high-probability bound. We also derive the result for the n-simplex, improving the O(sqrt(nT log(nT))) bound of Auer et al. by replacing the log T term with log log T and closing the gap to the lower bound of Omega(sqrt(nT)). While ~O(sqrt(T)) high-probability bounds should hold for general decision sets through our main result, the construction of the linear upper bounds depends on the particular geometry of the set; we believe that the sphere example already exhibits the necessary ingredients. The guarantees we obtain hold for adaptive adversaries (unlike the in-expectation results of Abernethy et al.), and the algorithms are efficient, provided that the linear upper bounds on confidence can be computed.
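For orientation, the following is a minimal LaTeX schematic of the standard bandit-to-full-information reduction the description refers to and of the general shape of a high-probability bound; the notation (decision set K, losses f_t, estimates tilde f_t, constant C, confidence level delta) is generic and assumed for illustration, not taken verbatim from the paper.

```latex
% Schematic of the bandit-to-full-information reduction (generic notation;
% an illustrative sketch, not the paper's exact statement or constants).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
At each round $t=1,\dots,T$ the learner plays $x_t$ in a convex set
$\mathcal{K}\subset\mathbb{R}^n$ and observes only the scalar loss
$f_t^{\top}x_t$. A sampling scheme produces an estimate $\tilde{f}_t$
of the unseen vector $f_t$ that is (nearly) unbiased,
\[
  \mathbb{E}_t\bigl[\tilde{f}_t\bigr] = f_t ,
\]
and the full-information black box is run on $\tilde{f}_1,\dots,\tilde{f}_T$.
Since the estimates can have large variance, the black box's regret
controls the true regret only in expectation; a high-probability
statement additionally requires a confidence term, and the linear upper
bounds on confidence intervals mentioned above let this term be folded
into the linear losses fed to the black box, yielding a bound of the form
\[
  \sum_{t=1}^{T} f_t^{\top}(x_t - u)
  \;\le\;
  \sum_{t=1}^{T} \tilde{f}_t^{\top}(x_t - u)
  \;+\; C\sqrt{T\log\tfrac{1}{\delta}}
  \qquad \text{for all } u \in \mathcal{K},
  \ \text{with probability at least } 1-\delta .
\]
\end{document}
```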