Universal lossless codes have been proven to exist, and practical universal lossless coding schemes have been constructed [27, 6]. Given the attractive property of a universal code, namely that it can asymptotically achieve a rate approaching the entropy of the source without a priori knowledge of the source distribution, a lossy counterpart of a universal code is highly desirable. In this work, our objective is to develop a practical one-pass lossy compression algorithm that approaches this universality property. We address the problem within the framework of vector quantization (VQ). Traditionally, vector quantization relies on a codebook that must be "trained" in advance on a set of signals, called the training sequence, generated from the target source. This assumes a priori knowledge of the source distribution. To relax this restrictive assumption, we have investigated methods for adapting the VQ codebook to the actual signal being coded. In the lossless domain, Ziv and Lempel introduced a simple scheme for universal data compression, which later gave rise to a whole class of algorithms. This class of algorithms builds a dynamic dictionary of source sequences parsed as strings, and replaces subsequent occurrences of these strings with pointers. Generalizing such exact string matching to approximate string matching, i.e., matching with distortion, we have incorporated the Lempel-Ziv idea into adaptive VQ and developed a universal lossy compression algorithm. The essence of the adaptive algorithm is the criterion it uses to encode the source vectors and adapt the codebook. A source vector can simply be encoded by a codevector in the current codebook, or it can trigger the addition of a new codevector. Existing codevectors can be moved or even deleted as the source distribution varies.
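The approximate-matching step described above can be illustrated as a nearest-neighbor search over the current codebook. The sketch below is a minimal illustration under a squared-error distortion measure; the function name and the choice of distortion are assumptions for illustration, not the thesis's exact matching rule.

```python
import numpy as np

def nearest_codevector(x, codebook):
    """Approximate match: return the index and distortion of the
    codevector in `codebook` closest to source vector `x` under
    squared error. (Illustrative sketch; the actual matching rule
    in the algorithm may differ.)"""
    d = np.sum((codebook - x) ** 2, axis=1)  # distortion to each codevector
    i = int(np.argmin(d))
    return i, float(d[i])

# Toy codebook of four 2-D codevectors
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx, dist = nearest_codevector(np.array([0.9, 0.1]), codebook)
# idx -> 1, dist -> 0.02
```

Whereas classical Lempel-Ziv parsing accepts only exact string matches, here a match is acceptable whenever its distortion is tolerable, which is what makes the dictionary idea applicable to lossy coding.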
These actions are taken to minimize the Lagrangian cost R + λD, where R is the bit rate incurred by an action, D is the distortion it introduces, and λ is a parameter chosen to control the operating point of the algorithm on the operational rate-distortion curve. In a sense, bits are "traded" for a reduction in distortion by modifying the codebook; hence the technique is called rate-distortion Lempel-Ziv (RDLZ). Experiments were performed on various sources, namely stationary Gaussian sources, "switching" (time-varying) Gaussian sources, standard test images, and composite images. The advantages of adaptation using the RDLZ algorithm are demonstrated in these experiments. When a VQ codebook is mismatched with the actual source signal, the adaptive algorithm achieves a significant improvement in rate-distortion performance. Empirical evidence for the universality of the algorithm is also given.
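The Lagrangian criterion above can be sketched as a simple cost comparison between two of the possible actions: reusing an existing codevector versus spending extra bits to add the source vector itself as a new codevector. The bit costs and the two-way choice are hypothetical simplifications for illustration; the thesis's rate accounting and action set (including moves and deletions) are richer.

```python
import numpy as np

def choose_action(x, codebook, lam, bits_index, bits_new):
    """Pick the action with the smaller Lagrangian cost R + lam * D.

    'reuse': code x with its nearest codevector
             (rate = bits_index, distortion = squared error).
    'add':   insert x itself as a new codevector
             (rate = bits_new, distortion taken as 0 here).
    bits_index and bits_new are hypothetical bit costs."""
    d = np.sum((codebook - x) ** 2, axis=1)
    i = int(np.argmin(d))
    cost_reuse = bits_index + lam * float(d[i])
    cost_add = bits_new  # + lam * 0.0
    return ("reuse", i) if cost_reuse <= cost_add else ("add", None)

codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x = np.array([0.5, 0.5])  # equally far (D = 0.5) from all four codevectors

# Small lam: distortion is cheap, so reuse an existing codevector.
a1 = choose_action(x, codebook, lam=10.0, bits_index=2, bits_new=34)
# Large lam: distortion is expensive, so pay the bits for a new codevector.
a2 = choose_action(x, codebook, lam=100.0, bits_index=2, bits_new=34)
```

Here `a1` selects "reuse" (cost 2 + 10·0.5 = 7 versus 34) while `a2` selects "add" (cost 2 + 100·0.5 = 52 versus 34), showing how λ steers the algorithm along the operational rate-distortion curve.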