While state-of-the-art machine learning models are deep, large-scale, sequential, and highly nonconvex, the backbone of modern learning algorithms consists of simple algorithms such as stochastic gradient descent, gradient descent with momentum, or Q-learning (in the case of reinforcement learning tasks). A basic question endures: why do simple algorithms work so well even in these challenging settings?
To answer this question, the thesis focuses on four concrete and fundamental questions:

- In nonconvex optimization, can (stochastic) gradient descent or its variants escape saddle points efficiently?
- Is gradient descent with momentum provably faster than gradient descent in the general nonconvex setting?
- In nonconvex-nonconcave minmax optimization, what is a proper definition of local optima, and is gradient descent ascent game-theoretically meaningful?
- In reinforcement learning, is Q-learning sample efficient?
This thesis provides the first line of provably positive answers to all of the above questions. In particular, it shows that although the standard versions of these classical algorithms do not enjoy good theoretical properties in the worst case, simple modifications are sufficient to grant them desirable behaviors, which explains the underlying mechanisms behind their favorable performance in practice.
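For example, one modification of this flavor is to occasionally add a small random perturbation to gradient descent when the gradient is small, which helps the iterate escape strict saddle points. The sketch below is only illustrative: the step size, perturbation radius, and trigger rule are assumptions made for this example, not the exact scheme analyzed in the thesis.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.01, r=0.1,
                               grad_tol=1e-3, perturb_every=50, n_iters=1000):
    """Gradient descent with occasional random perturbations (illustrative sketch).

    When the gradient norm is small (a candidate saddle point) and no
    perturbation has been added recently, inject small uniform noise so the
    iterate can slide off strict saddles. All parameter values here are
    assumptions for illustration only.
    """
    x = np.asarray(x0, dtype=float)
    last_perturb = -perturb_every
    for t in range(n_iters):
        g = grad(x)
        if np.linalg.norm(g) <= grad_tol and t - last_perturb >= perturb_every:
            # Near a stationary point: perturb instead of taking a zero step.
            x = x + np.random.uniform(-r, r, size=x.shape)
            last_perturb = t
        else:
            # Ordinary gradient descent step.
            x = x - eta * g
    return x

# Example: f(x, y) = x^2 - y^2 has a strict saddle at the origin. Plain
# gradient descent started exactly at (0, 0) never moves, while the
# perturbed variant escapes along the y direction.
saddle_grad = lambda x: np.array([2 * x[0], -2 * x[1]])
x_final = perturbed_gradient_descent(saddle_grad, x0=[0.0, 0.0])
```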
Title
Machine Learning: Why Do Simple Algorithms Work So Well?
Published
2019-05-17
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2019-53
Type
Text
Extent
159 p.
Archive
The Engineering Library
Usage Statement
Researchers may make free and open use of the UC Berkeley Library’s digitized public domain materials. However, some materials in our online collections may be protected by U.S. copyright law (Title 17, U.S.C.). Use or reproduction of materials protected by copyright beyond that allowed by fair use (Title 17, U.S.C. § 107) requires permission from the copyright owners. The use or reproduction of some materials may also be restricted by terms of University of California gift or purchase agreements, privacy and publicity rights, or trademark law. Responsibility for determining rights status and permissibility of any use or reproduction rests exclusively with the researcher. To learn more or make inquiries, please see our permissions policies (https://www.lib.berkeley.edu/about/permissions-policies).