Description
The first scenario arises when a large set of labeled, noisy observations is available from a given class (or phenotype) with an unknown generative model. An interesting challenge here is to estimate the underlying generative model together with the distribution over the distortion parameters that map the observed examples to that model. For example, this is the scenario encountered when constructing high-throughput, data-driven spatial gene expression atlases from many thousands of noisy images of Drosophila melanogaster imaginal discs. We discuss improvements to an existing information-theoretic approach for joint pattern alignment (JPA) that make it suitable for such high-throughput settings. Along with a discussion of the assumptions, advantages, and limitations of our approach (Chapter 2), we show how this framework can be applied to a variety of applications (Chapters 3, 4, 5). A small illustrative sketch of this kind of joint alignment problem follows.
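To make the problem setup concrete, the sketch below is a minimal, hedged illustration of joint alignment by entropy minimization (in the spirit of congealing-style methods), not the JPA algorithm developed in Chapter 2. It assumes binary images corrupted only by unknown integer translations; all function names are ours for illustration.

```python
import numpy as np
from itertools import product

def stack_entropy(stack):
    """Mean per-pixel Bernoulli entropy of a binary image stack of shape (N, H, W)."""
    p = np.clip(stack.mean(axis=0), 1e-6, 1 - 1e-6)
    return float(np.mean(-p * np.log(p) - (1 - p) * np.log(1 - p)))

def congeal(images, max_shift=3, n_iters=10):
    """Greedy joint alignment over integer translations by entropy minimization.

    images : ndarray (N, H, W) of binary patterns observed under unknown shifts.
    Returns per-image (dy, dx) corrections and the aligned stack; the pixel-wise
    mean of the aligned stack is a crude estimate of the shared generative model,
    and the recovered shifts give an empirical distribution over the distortions.
    """
    shifts = range(-max_shift, max_shift + 1)
    aligned = images.copy()
    offsets = [(0, 0)] * len(images)
    for _ in range(n_iters):
        for i in range(len(images)):
            best, best_h = offsets[i], np.inf
            for dy, dx in product(shifts, shifts):
                trial = aligned.copy()
                trial[i] = np.roll(images[i], (dy, dx), axis=(0, 1))
                h = stack_entropy(trial)
                if h < best_h:
                    best_h, best = h, (dy, dx)
            offsets[i] = best
            aligned[i] = np.roll(images[i], best, axis=(0, 1))
    return offsets, aligned
```

In this toy setting, the distortion model is restricted to translations and the generative model to a pixel-wise Bernoulli template; the actual framework handles richer distortion families and noise models.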
The second scenario arises when observations are available from multiple classes (phenotypes) without any labels. An interesting challenge here is to estimate a data-driven organizational hierarchy that facilitates efficient retrieval and easy browsing of the observations. For example, this is the scenario encountered when organizing large collections of unlabeled activity videos based on the spatio-temporal patterns embedded in them, such as human actions. We show how insights from computer vision and data compression can be leveraged to provide a fast, robust solution to the problem of content-based hierarchy estimation (based on action similarity) for large video collections with minimal user supervision (Chapter 6). We demonstrate the usefulness of our approach on a benchmark dataset of human action videos. A rough sketch of compression-based hierarchy estimation is given below.
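As a rough illustration of how compression-based similarity can drive hierarchy estimation, the sketch below uses the normalized compression distance over serialized per-video feature streams, followed by standard agglomerative clustering. This is not the Chapter 6 method; the feature-serialization step and all names are hypothetical placeholders.

```python
import zlib
import numpy as np
from scipy.cluster.hierarchy import linkage

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def hierarchy_from_features(feature_streams):
    """Build an agglomerative hierarchy from per-video feature byte streams.

    feature_streams : list of bytes, one per video (e.g. quantized
    spatio-temporal descriptors serialized to bytes).
    Returns a SciPy linkage matrix usable for dendrogram-style browsing.
    """
    n = len(feature_streams)
    # Condensed pairwise distance vector, as expected by scipy's linkage.
    dists = [ncd(feature_streams[i], feature_streams[j])
             for i in range(n) for j in range(i + 1, n)]
    return linkage(np.asarray(dists), method="average")

# Usage sketch (descriptors_to_bytes is a hypothetical helper):
#   streams = [descriptors_to_bytes(v) for v in videos]
#   Z = hierarchy_from_features(streams)
#   scipy.cluster.hierarchy.dendrogram(Z)
```

The appeal of a compression-based distance in this setting is that it requires no training and no explicit similarity model, which keeps user supervision minimal; the cost is the quadratic number of pairwise comparisons, which the actual system must mitigate at scale.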