Description
In this thesis, we consider the setting in which source symbols stream into the encoder in real time and the decoder must produce decisions on a symbol-by-symbol basis within a finite delay. Under a finite delay constraint, a fundamental question is how many channel uses per source symbol are needed to achieve a given symbol error probability. To our knowledge, this question has never been systematically studied. We answer the source coding side of the question by studying the asymptotic rate at which the symbol error probability decays as a function of delay: the delay constrained error exponent.
The technical contributions of this thesis are as follows. We derive an upper bound on the delay constrained error exponent for lossless source coding and show that this bound is achievable by a fixed-to-variable-length coding scheme. We then extend the same treatment to lossy source coding with delay, where a tight bound on the error exponent is derived. Both delay constrained error exponents are connected to their block coding counterparts through a "focusing" operator. An achievability result for lossless multiple terminal source coding is then derived in the delay constrained setting. Finally, we borrow the genie-aided feed-forward decoder argument from the channel coding literature to derive an upper bound on the delay constrained error exponent for source coding with decoder side-information. This upper bound is strictly smaller than the error exponent for source coding with side-information at both the encoder and the decoder. This "price of ignorance" phenomenon appears only in the streaming setup with delay, not in the traditional block coding setup.
These delay constrained error exponents for streaming source coding generally differ from their block coding counterparts; a similar difference has recently been observed in the literature on channel coding with feedback.