The concept of sparsity is widely used in the signal processing, machine learning, and statistics communities for model fitting and solving inverse problems. It is also important in neuroscience, where it is thought to underlie the neural representations employed by the brain. In this thesis, I derive new algorithms for learning higher-order structure in sparse coding models of images, and I present an improved algorithm for inferring sparse representations with sequential observations.

It has been shown that adapting a dictionary of basis functions to the statistics of natural images, so as to maximize sparsity in the coefficients, yields a set of dictionary elements whose spatial properties resemble those of primary visual cortex receptive fields. Computing the sparse coefficients can be cast as an ℓ1-penalized least-squares problem, commonly referred to as Basis Pursuit Denoising or the Lasso. However, the resulting sparse coefficients still exhibit pronounced statistical dependencies, violating the independence assumption of the sparse coding model. In this thesis I propose two models that attempt to capture the dependencies among the basis function coefficients. The first model includes a pairwise coupling term in the prior over the coefficient activity states. When adapted to the statistics of natural images, the coupling terms converge to a solution involving facilitatory and inhibitory interactions among neighboring basis functions. In the second model, the prior is a mixture of Laplacian distributions, with the statistical dependencies among the basis function coefficients modeled through the scale parameters. I show that the efficient algorithms developed for Basis Pursuit Denoising can be leveraged to derive improved inference algorithms under the Laplacian scale mixture prior.
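As a concrete illustration of the ℓ1-penalized least-squares problem mentioned above, the following sketch solves Basis Pursuit Denoising with a generic iterative shrinkage-thresholding (ISTA) solver. This is a standard baseline, not the specific inference algorithms developed in the thesis; the function and variable names (`ista`, `D`, `lam`) and the toy dictionary are illustrative.

```python
import numpy as np

def ista(D, x, lam, n_iter=500):
    """ISTA for min_a 0.5 * ||x - D @ a||^2 + lam * ||a||_1 (Basis Pursuit Denoising)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term's gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L      # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold step
    return a

# Toy demo: recover a sparse coefficient vector from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
a_true = np.zeros(10)
a_true[[1, 4]] = [2.0, -1.5]
a_hat = ista(D, D @ a_true, lam=0.1)
```

The soft-threshold step is what drives most coefficients exactly to zero, which is the source of the sparsity in the representation.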

I also propose in this thesis a new algorithm, RecLasso, for solving the Lasso with online observations. I introduce an optimization problem that allows one to compute a homotopy from the current solution to the solution after a new data point is observed. I compare RecLasso to LARS and coordinate descent, and present an application to compressive sensing with sequential observations. I also propose an algorithm that automatically updates the regularization parameter after a new data point is observed.
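The online setting above can be sketched with a warm-started generic Lasso solver: when a new observation arrives, re-solve starting from the previous solution. This illustrates the problem setting only, not the RecLasso homotopy itself, which follows the solution path exactly rather than iterating to convergence; all names below (`lasso`, `x_new`, `lam`) are illustrative.

```python
import numpy as np

def lasso(X, y, lam, a0, n_iter=400):
    """ISTA warm-started at a0; a stand-in for any l1 solver, not RecLasso itself."""
    L = np.linalg.norm(X, 2) ** 2
    a = a0.copy()
    for _ in range(n_iter):
        z = a - X.T @ (X @ a - y) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

# Sequential setting: solve once, then update after one new observation arrives.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 8))
a_true = np.zeros(8)
a_true[[0, 3]] = [1.0, -2.0]
y = X @ a_true
a_old = lasso(X, y, lam=0.05, a0=np.zeros(8))

x_new = rng.standard_normal(8)              # the newly observed data point
X1 = np.vstack([X, x_new])
y1 = np.append(y, x_new @ a_true)
a_new = lasso(X1, y1, lam=0.05, a0=a_old)   # warm start from the previous solution
```

Because a single new observation usually perturbs the solution only slightly, the warm start converges far faster than solving from scratch; a homotopy method exploits the same observation in closed form along the path.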
