Traditionally, taking experimental measurements of a physical or biological phenomenon was an expensive, laborious, and very slow process. However, significant advances in device technologies and computational techniques have sharply reduced the costs of data collection. Capturing thousands of images of developing biological organisms, recording enormous amounts of video footage from a network of cameras monitoring an observation space, or obtaining large numbers of neural measurements of brain signal patterns via non-invasive devices are all examples of this data proliferation. Analyzing such large volumes of multidimensional data through expert supervision is neither scalable nor cost-effective. In this context, there is a need for systems that complement the expert user by learning meaningful and compact representations from large collections of multidimensional data (images, videos, etc.) with minimal supervision. In this dissertation, we present minimally supervised solutions to two commonly encountered scenarios.
The first scenario arises when a large set of labeled, noisy observations is available from a given class (or phenotype) with an unknown generative model. The challenge here is to estimate the underlying generative model and the distribution over the distortion parameters that map the observed examples to that model. For example, this is the scenario encountered when constructing high-throughput, data-driven spatial gene expression atlases from many thousands of noisy images of Drosophila melanogaster imaginal discs. We discuss improvements to an existing information-theoretic approach for joint pattern alignment (JPA) that address such high-throughput scenarios. Along with a discussion of the assumptions, advantages, and limitations of our approach (Chapter 2), we show how this framework can be applied to a variety of applications (Chapters 3, 4, 5).
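To make the JPA setting concrete, the following minimal Python sketch (using only NumPy) alternates between aligning each noisy observation to the current template estimate and re-estimating the template as the mean of the aligned observations. This is an illustrative toy over 1-D signals and integer circular shifts only; the function names and the choice of squared-error alignment are our assumptions for illustration, not the information-theoretic algorithm developed in Chapter 2.

    import numpy as np

    def align_to_template(obs, template, max_shift=5):
        # Search integer circular shifts for the one that best matches
        # the template in squared error; return the shifted observation.
        best, best_err = obs, np.inf
        for s in range(-max_shift, max_shift + 1):
            shifted = np.roll(obs, s)
            err = float(np.sum((shifted - template) ** 2))
            if err < best_err:
                best, best_err = shifted, err
        return best

    def joint_pattern_alignment(observations, n_iters=10):
        # Alternate between (a) aligning every observation to the
        # current template and (b) updating the template as the mean
        # of the aligned observations.
        template = np.mean(observations, axis=0)  # crude initialization
        aligned = list(observations)
        for _ in range(n_iters):
            aligned = [align_to_template(o, template) for o in observations]
            template = np.mean(aligned, axis=0)
        return template, aligned

    # Toy usage: noisy, randomly shifted copies of a common bump pattern.
    rng = np.random.default_rng(0)
    truth = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)
    obs = [np.roll(truth, rng.integers(-5, 6)) + 0.05 * rng.normal(size=64)
           for _ in range(50)]
    template, _ = joint_pattern_alignment(obs)

Even in this toy form, the sketch shows the two coupled estimation problems the abstract names: the template plays the role of the unknown generative model, and the per-observation shifts play the role of the distortion parameters.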
The second scenario arises when observations are available from multiple classes (phenotypes) without any labels. The challenge here is to estimate a data-driven organizational hierarchy that facilitates efficient retrieval and easy browsing of the observations. For example, this is the scenario encountered when organizing large collections of unlabeled activity videos based on the spatio-temporal patterns, such as human actions, embedded in the videos. We show how insights from computer vision and data compression can be leveraged to provide a fast and robust solution to the problem of content-based hierarchy estimation (based on action similarity) for large video collections with minimal user supervision (Chapter 6). We demonstrate the usefulness of our approach on a benchmark dataset of human action videos.
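As one concrete illustration of how data-compression ideas can drive hierarchy estimation, the Python sketch below computes the normalized compression distance (NCD) between byte-serialized feature streams (one per video) using zlib, and feeds the pairwise distances to SciPy's agglomerative clustering. The NCD/zlib/SciPy combination and all names here are assumptions chosen for illustration; the representation and similarity actually used in Chapter 6 may differ.

    import zlib
    import numpy as np
    from scipy.cluster.hierarchy import linkage

    def ncd(x: bytes, y: bytes) -> float:
        # Normalized compression distance: how much better does the
        # concatenation compress than either stream alone?
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    def build_hierarchy(streams):
        # `streams` is a list of byte strings, e.g. serialized
        # spatio-temporal features extracted from each video.
        n = len(streams)
        # Condensed pairwise-distance vector expected by linkage().
        dists = [ncd(streams[i], streams[j])
                 for i in range(n) for j in range(i + 1, n)]
        return linkage(np.asarray(dists), method='average')

    # Toy usage: two "action" families of byte streams cluster apart.
    streams = [b'walkwalkwalk' * 20, b'walkwalkstep' * 20,
               b'jumpjumpjump' * 20, b'jumpjumphop!' * 20]
    tree = build_hierarchy(streams)  # (n-1) x 4 linkage matrix

The appeal of a compression-based similarity in this unlabeled setting is that it requires no training and no labels: streams that share repeated spatio-temporal structure compress well together and therefore land near each other in the resulting hierarchy.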
Title
Learning Data Driven Representations from Large Collections of Multidimensional Patterns with Minimal Supervision
Published
2008-08-04
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2008-90
Type
Text
Extent
149 p.
Archive
The Engineering Library